From jeevan.baretto at gmail.com Thu May 1 05:09:23 2008 From: jeevan.baretto at gmail.com (Jeevan Baretto) Date: Thu, 1 May 2008 14:39:23 +0530 Subject: [SciPy-user] Surrogate Model/Response Surface model Message-ID: <46f941590805010209w4fa9d82fgecc2029355be2e92@mail.gmail.com> Hi, I was looking for an optimization module in Scipy on Surrogate model ( http://en.wikipedia.org/wiki/Surrogate_model ) also known as Response Surface model. I couldn't find one. Can anyone help me out with this? Thanks, Jeevan IIT Bombay -------------- next part -------------- An HTML attachment was scrubbed... URL: From turian at gmail.com Thu May 1 15:54:13 2008 From: turian at gmail.com (Joseph Turian) Date: Thu, 1 May 2008 15:54:13 -0400 Subject: [SciPy-user] Iteration over scipy.sparse matrices? Message-ID: <4dacb2560805011254l6e31b849id9390f42e19d4454@mail.gmail.com> Is there a (storage-format agnostic) method for iterating over the elements of a sparse matrix? I don't care what order they come in. I just want to make sure that I can iterate over the matrix in time linear in nnz, and have the (row, col) and data for each non-zero entry. Thanks! Joseph -- Academic: http://www-etud.iro.umontreal.ca/~turian/ Business: http://www.metaoptimize.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dharhas.Pothina at twdb.state.tx.us Thu May 1 16:15:08 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 01 May 2008 15:15:08 -0500 Subject: [SciPy-user] Converting arrays to dates for plot_date() Message-ID: <4819DE7C0200009B0001218B@GWWEB.twdb.state.tx.us> Hi, I'm sure there is a simple way to do this that I'm missing. I'm coming form a matlab background and am still having trouble understanding the interactions between arrays and lists etc I have a file with time series data 1986 7 8 32.1 1986 7 9 42.5 1986 7 10 22.2 ... I've read this in using loadtxt to the arrays : year,month,day & data What I want to do is use matplotlibs plot_date() command to plot the data against the date. I've worked out that I can use something like datestr2num(['2006-01-01','2006-01-02']) to generate the list of dates for plot_date() but I can't work out how to convert year = array([2006.0,2007.0]) month = array([11,12]) day = array([1,2]) into the form datestring = array(['2006-11-1','2007-12-2']) I've also tried using the datetime() function but I've worked out that doesn't work with numpy arrays. any pointers/help would be greatly appreciated. thanks, - dharhas From robert.kern at gmail.com Thu May 1 16:20:11 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 1 May 2008 15:20:11 -0500 Subject: [SciPy-user] Surrogate Model/Response Surface model In-Reply-To: <46f941590805010209w4fa9d82fgecc2029355be2e92@mail.gmail.com> References: <46f941590805010209w4fa9d82fgecc2029355be2e92@mail.gmail.com> Message-ID: <3d375d730805011320i44b89fb7k820bc4142d1568b1@mail.gmail.com> On Thu, May 1, 2008 at 4:09 AM, Jeevan Baretto wrote: > Hi, > I was looking for an optimization module in Scipy on Surrogate model ( > http://en.wikipedia.org/wiki/Surrogate_model ) also known as Response > Surface model. I couldn't find one. Can anyone help me out with this? There is nothing in particular in scipy which forms the surrogate models except perhaps splines in scipy.interpolate if the dimensions are small. 
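For instance, in one dimension a rough sketch along those lines might look like the following (the expensive_model function, the sample points and the smoothing factor here are purely hypothetical placeholders, not anything that ships with scipy):

import numpy as np
from scipy import interpolate, optimize

def expensive_model(x):
    # stand-in for the costly simulation being approximated
    return (x - 0.3)**2 + 0.1*np.sin(20*x)

# evaluate the true model at a handful of design points
x_samples = np.linspace(0.0, 1.0, 15)
y_samples = np.array([expensive_model(x) for x in x_samples])

# fit a smoothing spline as the surrogate / response surface
surrogate = interpolate.UnivariateSpline(x_samples, y_samples, k=3, s=0.01)

# optimize the cheap surrogate instead of the expensive model
x_best = optimize.fminbound(lambda x: float(surrogate(x)), 0.0, 1.0)

Refining the surrogate around the returned optimum with fresh evaluations of the true model is up to you; scipy does not manage that loop.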
There is some Gaussian process (equivalent to kriging) code here: http://code.google.com/p/random-realizations/ Once you have fitted the surrogate model, you can make a function from it to pass to any of the optimization routines in scipy.optimize. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Thu May 1 16:27:10 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 1 May 2008 16:27:10 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: 2008/4/30 Rob Clewley : > Could someone please make the new interpolation classes into new-style > classes? And I don't know if it's considered a big deal, but for > future compatibility maybe the exception raising should be done in the > functional style: ValueError("message") rather than ValueError, > message ? Done. Plus the procedural versions are there. (Thirty-eight lines of docstring for two lines of function!) Anne From jdh2358 at gmail.com Thu May 1 16:49:49 2008 From: jdh2358 at gmail.com (John Hunter) Date: Thu, 1 May 2008 15:49:49 -0500 Subject: [SciPy-user] Converting arrays to dates for plot_date() In-Reply-To: <4819DE7C0200009B0001218B@GWWEB.twdb.state.tx.us> References: <4819DE7C0200009B0001218B@GWWEB.twdb.state.tx.us> Message-ID: <88e473830805011349r6e2d0f03k580e120d2b634f77@mail.gmail.com> On Thu, May 1, 2008 at 3:15 PM, Dharhas Pothina wrote: > to generate the list of dates for plot_date() > > but I can't work out how to convert > > year = array([2006.0,2007.0]) > month = array([11,12]) > day = array([1,2]) dates = [datetime.date(y,m,d) for y,m,d in zip(year, month, day)] plot(dates, data) # requires 0.91.2 or svn -- else use date2num and plot_date JDH From wnbell at gmail.com Thu May 1 17:32:11 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 1 May 2008 16:32:11 -0500 Subject: [SciPy-user] Iteration over scipy.sparse matrices? In-Reply-To: <4dacb2560805011254l6e31b849id9390f42e19d4454@mail.gmail.com> References: <4dacb2560805011254l6e31b849id9390f42e19d4454@mail.gmail.com> Message-ID: On Thu, May 1, 2008 at 2:54 PM, Joseph Turian wrote: > Is there a (storage-format agnostic) method for iterating over the elements > of a sparse matrix? > I don't care what order they come in. I just want to make sure that I can > iterate over the matrix in time linear in nnz, and have the (row, col) and > data for each non-zero entry. In the current SVN version of SciPy all sparse matrices may be converted to the "coordinate" format using the .tocoo() member function. Alternatively, one may pass any matrix (sparse or dense) to the coo_matrix constructor. Using the COO format makes iteration trivial: M = .... #sparse or dense matrix A = coo_matrix(M) for i,j,v in zip(A.row, A.col, A.data): print "row = %d, column = %d, value = %s" % (i,j,v) Some sparse matrices support a rowcol() method that does something similar without making a conversion. However, rowcol() will deprecated in the next release since it's much slower than doing a single .tocoo(). 
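As a self-contained sketch of that pattern (the little matrix below is an arbitrary placeholder), you can also gather the entries into an ordinary Python structure in one pass:

from scipy.sparse import lil_matrix, coo_matrix

M = lil_matrix((4, 4))          # placeholder matrix; any sparse format will do
M[0, 1] = 2.0
M[2, 3] = 5.0
M[3, 0] = 7.0

A = coo_matrix(M)               # or M.tocoo()
# one (row, col, value) triple per stored entry, O(nnz) overall
entries = dict(((i, j), v) for i, j, v in zip(A.row, A.col, A.data))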
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From wnbell at gmail.com Thu May 1 17:40:05 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 1 May 2008 16:40:05 -0500 Subject: [SciPy-user] Iteration over scipy.sparse matrices? In-Reply-To: References: <4dacb2560805011254l6e31b849id9390f42e19d4454@mail.gmail.com> Message-ID: On Thu, May 1, 2008 at 4:32 PM, Nathan Bell wrote: > for i,j,v in zip(A.row, A.col, A.data): > print "row = %d, column = %d, value = %s" % (i,j,v) I should mention that since A.row, A.col, and A.data are all numpy arrays you can often vectorize sparse computations with them. For instance, suppose we wanted to eliminate all entries of a coo_matrix A that are less than 5 and store the result in a matrix B: A = coo_matrix(....) mask = A.data < 5 B = coo_matrix( (data[mask],(row[mask],col[mask])), shape=A.shape) As another example, extract all the entries above the diagonal: A = coo_matrix(....) mask = A.col > A.row B = coo_matrix( (data[mask],(row[mask],col[mask])), shape=A.shape) Since conversions to and from the COO format are quite fast, you can use this approach to efficiently implement lots computations on sparse matrices. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From gnurser at googlemail.com Thu May 1 18:00:19 2008 From: gnurser at googlemail.com (George Nurser) Date: Thu, 1 May 2008 23:00:19 +0100 Subject: [SciPy-user] [Matplotlib-users] ttconv/truetype.h compile error using gcc-4.3 In-Reply-To: <481A0953.6030601@gmail.com> References: <481A0953.6030601@gmail.com> Message-ID: <1d1e6ea70805011500v3eb1d0e0s1f48232f22b955e2@mail.gmail.com> This may or may not be relevant, but I made up python/numpy/scipy compiled with gcc 4.3 on OS X 10.5 a month or so ago. Python and numpy seemed to work fine, but although scipy compiled ok, scipy failed its test suite. It built the c++ ext_module_with_include.so (though with a lot of compiler warnings), but gave a segmentation fault when it was called. I therefore assumed matplotlib would not work correctly since it has a lot of C++ extensions and went back to the standard gcc 4.0.1.... George Nurser. 2008/5/1 Xavier Gnata : > Hi, > > Using gcc-4.3 I get this error : > > tconv/truetype.h:50: error: ISO C++ forbids declaration of 'FILE' with > no type > > #include is missing. > > Xavier > > ------------------------------------------------------------------------- > This SF.net email is sponsored by the 2008 JavaOne(SM) Conference > Don't miss this year's exciting event. There's still time to save $100. > Use priority code J8TL2D2. > http://ad.doubleclick.net/clk;198757673;13503038;p?http://java.sun.com/javaone > _______________________________________________ > Matplotlib-users mailing list > Matplotlib-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/matplotlib-users > From tjhnson at gmail.com Thu May 1 20:21:18 2008 From: tjhnson at gmail.com (Tom Johnson) Date: Thu, 1 May 2008 17:21:18 -0700 Subject: [SciPy-user] optimize, line fit of convex functions Message-ID: I'd like to extrapolate some points from a fitted line. Given that true line is guaranteed to be convex, are there preferred methods for fitting the data? 
From robert.kern at gmail.com Thu May 1 20:40:42 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 1 May 2008 19:40:42 -0500 Subject: [SciPy-user] optimize, line fit of convex functions In-Reply-To: References: Message-ID: <3d375d730805011740k62db32d3h757d871501d1c086@mail.gmail.com> On Thu, May 1, 2008 at 7:21 PM, Tom Johnson wrote: > I'd like to extrapolate some points from a fitted line. Given that > true line is guaranteed to be convex, are there preferred methods for > fitting the data? There are algorithms for "shape-preserving splines" which maintain certain features like convexity or monotonicity, but I don't think we have any of them implemented in scipy. Unfortunately, I only know of their existence, not any of the details, so I can't tell you how easy it would be to implement. But use those search terms to find the appropriate literature, and let us know if you implement something. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Dharhas.Pothina at twdb.state.tx.us Fri May 2 08:21:14 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Fri, 02 May 2008 07:21:14 -0500 Subject: [SciPy-user] Converting arrays to dates for plot_date() Message-ID: <481AC0EA0200009B000121AF@GWWEB.twdb.state.tx.us> Thank you John, I tried that and got the error message : TypeError: descriptor 'date' requires a 'datetime.datetime' object but received a 'numpy.float64' but using datetime() rather than datetime.date() seems to work. fyi. y'all have a great community here. I've asked two questions in the last few days and was answered quickly and correctly on both occasions. thanks, - dharhas >>> "John Hunter" 05/01/08 3:49 PM >>> On Thu, May 1, 2008 at 3:15 PM, Dharhas Pothina wrote: > to generate the list of dates for plot_date() > > but I can't work out how to convert > > year = array([2006.0,2007.0]) > month = array([11,12]) > day = array([1,2]) dates = [datetime.date(y,m,d) for y,m,d in zip(year, month, day)] plot(dates, data) # requires 0.91.2 or svn -- else use date2num and plot_date JDH _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From jdh2358 at gmail.com Fri May 2 09:15:26 2008 From: jdh2358 at gmail.com (John Hunter) Date: Fri, 2 May 2008 08:15:26 -0500 Subject: [SciPy-user] Converting arrays to dates for plot_date() In-Reply-To: <481AC0EA0200009B000121AF@GWWEB.twdb.state.tx.us> References: <481AC0EA0200009B000121AF@GWWEB.twdb.state.tx.us> Message-ID: <88e473830805020615n1199ddc6m958449c16c295a88@mail.gmail.com> On Fri, May 2, 2008 at 7:21 AM, Dharhas Pothina wrote: > Thank you John, I tried that and got the error message : > > TypeError: descriptor 'date' requires a 'datetime.datetime' object but received a 'numpy.float64' > > but using datetime() rather than datetime.date() seems to work. > I think the problem may be that you had floats rather than ints in your "year" array. For example, the following does work: In [34]: data = array([1,2]) In [35]: year = array([2006,2007]) In [36]: month = array([11,12]) In [37]: day = array([1,2]) In [38]: dates = [datetime.date(y,m,d) for y,m,d in zip(year, month, day)] In [39]: dates Out[39]: [datetime.date(2006, 11, 1), datetime.date(2007, 12, 2)] > fyi. y'all have a great community here. 
I've asked two questions in the last few days and was > answered quickly and correctly on both occasions. Glad to help. JDH From cohen at slac.stanford.edu Fri May 2 11:25:37 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 02 May 2008 17:25:37 +0200 Subject: [SciPy-user] computing Bayesian credible intervals Message-ID: <481B3271.1000006@slac.stanford.edu> Hello, The question is a bit more general than that : How can I use SciPy to compute the values a and b defined by : 1) integral from a to b of a pdf p(x) (or any function for that matter) = a given value q 2) p(a)=p(b) thanks in advance, Johann From Dharhas.Pothina at twdb.state.tx.us Fri May 2 12:11:04 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Fri, 02 May 2008 11:11:04 -0500 Subject: [SciPy-user] sorting timeseries data. Message-ID: <481AF6C80200009B000121D3@GWWEB.twdb.state.tx.us> Hi, I have some field data with unequal spacing and some duplicate values that I am trying to plot a spline through. It seems that the splrep and splev functions require a unique monotonically increasing series of values. Say I have an unordered dataset with some duplicate values like : seconds = array([1,3,2,5,4,1,6,9,8]) data = array([100, 101, 102, 103, 104, 105, 106, 107, 108]) I want to sort the data to be monotonically increasing by the variable seconds and filter out duplicate values (say by deleting the second occurrence). I've tried combining the arrays : a = array([seconds,data]) and then using sort and argsort with various options but instead of sorting by the first column it sorts *every* column. I've also tried a=zip(x,y) followed by sort() / unique() my final sorted array needs to look like newseconds = array([1,1,2,3,4,5,6,8,9]) newdata = array([100,105,102,101,104,103,106,108,107]) and then removing duplicates should look like finalseconds = array([1,2,3,4,5,6,8,9]) finaldata = array([100,102,101,104,103,106,108,107]) Any help is appreciated. thanks - dharhas From pgmdevlist at gmail.com Fri May 2 12:34:11 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 2 May 2008 12:34:11 -0400 Subject: [SciPy-user] sorting timeseries data. In-Reply-To: <481AF6C80200009B000121D3@GWWEB.twdb.state.tx.us> References: <481AF6C80200009B000121D3@GWWEB.twdb.state.tx.us> Message-ID: <200805021234.13055.pgmdevlist@gmail.com> On Friday 02 May 2008 12:11:04 Dharhas Pothina wrote: > I want to sort the data to be monotonically increasing by the variable > seconds and filter out duplicate values (say by deleting the second > occurrence). Dharhas, >>>idx = seconds.argsort() >>>sorted_seconds = seconds[idx] >>>sorted_data = data[idx] will do the trick. Look at the help for the argsort method if you need to use a specific sorting algorithm. 'mergesort' is stable and can be preferred. Then, you can try to find the duplicates that way: >>>diffs = numpy.ediff1d(sorted_seconds, to begin=1) >>>unq = (diffs!=0) >>>final_seconds = sorted_seconds.compress(unq) >>>final_data = sorted_data.compress(unq) In a side note, you may want to give scikits.timeseries a try: we develop this package specifically to handle time series (ie, series indexed in time). The sorting part would be automatic, and finding the duplicates is also quite easy. 
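Putting those pieces together as one runnable sketch, with your example arrays (note the keyword argument is to_begin):

import numpy

seconds = numpy.array([1, 3, 2, 5, 4, 1, 6, 9, 8])
data = numpy.array([100, 101, 102, 103, 104, 105, 106, 107, 108])

# stable sort by time so the first occurrence of a duplicate is the one kept
idx = seconds.argsort(kind='mergesort')
sorted_seconds = seconds[idx]
sorted_data = data[idx]

# keep an entry only where the time value changes
diffs = numpy.ediff1d(sorted_seconds, to_begin=1)
unq = (diffs != 0)
final_seconds = sorted_seconds.compress(unq)   # [1 2 3 4 5 6 8 9]
final_data = sorted_data.compress(unq)         # [100 102 101 104 103 106 108 107]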
HIH From nmb at wartburg.edu Fri May 2 14:02:27 2008 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Fri, 2 May 2008 18:02:27 +0000 (UTC) Subject: [SciPy-user] computing Bayesian credible intervals References: <481B3271.1000006@slac.stanford.edu> Message-ID: Johann Cohen-Tanugi slac.stanford.edu> writes: > The question is a bit more general than that : How can I use SciPy to > compute the values a and b defined by : > 1) integral from a to b of a pdf p(x) (or any function for that matter) > = a given value q > 2) p(a)=p(b) Problem 1 is under-specified. You have one equation for two unknowns and generally will not be able to solve uniquely for both of the endpoints. It may be common in your field to require some sort of symmetry between a and b, e.g. a = mu + E, b = mu - E where mu is some fixed quantity and now you solve for E rather than a and b separately. I (perhaps naively) would do this in scipy using scipy.optimize.brentq such as form numpy import exp from scipy.optimize import brentq mu = 0 def density(x): return exp(-x) def integral_function(E, mu): return scipy.integrate(density, mu-E, mu+E) - q margin = brentq(integral_function, 0, 1000) a,b = mu - margin, mu+margin For problem 2, I'm not sure what the function p represents, but a suitable adaptation of the above should work. -Neil From cohen at slac.stanford.edu Fri May 2 14:09:40 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 02 May 2008 20:09:40 +0200 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: References: <481B3271.1000006@slac.stanford.edu> Message-ID: <481B58E4.1050603@slac.stanford.edu> hi Neil, thanks for your answer and sorry I was not clear enough. Of course I require the 2 conditions. 1) defines *a* credible interval if p is a posterior pdf; and 2) sets a constraint that for common situation yield *the* standard Bayesian credible interval. I will have a look at brentq, I do not know what it refers to. best, Johann Neil Martinsen-Burrell wrote: > Johann Cohen-Tanugi slac.stanford.edu> writes: > > >> The question is a bit more general than that : How can I use SciPy to >> compute the values a and b defined by : >> 1) integral from a to b of a pdf p(x) (or any function for that matter) >> = a given value q >> 2) p(a)=p(b) >> > > Problem 1 is under-specified. You have one equation for two unknowns and > generally will not be able to solve uniquely for both of the endpoints. It may > be common in your field to require some sort of symmetry between a and b, e.g. a > = mu + E, b = mu - E where mu is some fixed quantity and now you solve for E > rather than a and b separately. I (perhaps naively) would do this in scipy > using scipy.optimize.brentq such as > > form numpy import exp > from scipy.optimize import brentq > > mu = 0 > > def density(x): > return exp(-x) > > def integral_function(E, mu): > return scipy.integrate(density, mu-E, mu+E) - q > > margin = brentq(integral_function, 0, 1000) > a,b = mu - margin, mu+margin > > For problem 2, I'm not sure what the function p represents, but a suitable > adaptation of the above should work. 
> > -Neil > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Fri May 2 14:01:33 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 2 May 2008 13:01:33 -0500 Subject: [SciPy-user] Converting arrays to dates for plot_date() In-Reply-To: <88e473830805020615n1199ddc6m958449c16c295a88@mail.gmail.com> References: <481AC0EA0200009B000121AF@GWWEB.twdb.state.tx.us> <88e473830805020615n1199ddc6m958449c16c295a88@mail.gmail.com> Message-ID: <3d375d730805021101q2581514g78fe064a7d995148@mail.gmail.com> On Fri, May 2, 2008 at 8:15 AM, John Hunter wrote: > On Fri, May 2, 2008 at 7:21 AM, Dharhas Pothina > > wrote: > > > Thank you John, I tried that and got the error message : > > > > TypeError: descriptor 'date' requires a 'datetime.datetime' object but received a 'numpy.float64' > > > > but using datetime() rather than datetime.date() seems to work. > > I think the problem may be that you had floats rather than ints in > your "year" array. Actually, I think the problem is that you did import datetime and he did from datetime import datetime but neither of you actually showed the import statements you used. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmb at wartburg.edu Fri May 2 16:04:09 2008 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Fri, 2 May 2008 20:04:09 +0000 (UTC) Subject: [SciPy-user] computing Bayesian credible intervals References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> Message-ID: Johann Cohen-Tanugi slac.stanford.edu> writes: > hi Neil, thanks for your answer and sorry I was not clear enough. Of > course I require the 2 conditions. 1) defines *a* credible interval if p > is a posterior pdf; and 2) sets a constraint that for common situation > yield *the* standard Bayesian credible interval. I will have a look at > brentq, I do not know what it refers to. scipy.optimize.brentq is Brent's method for finding a root of a given scalar equation. Since you are looking for two values, a and b, with two conditions, then Brent's method is not appropriate (barring some symmetry-based reduction to one variable). I like to use scipy.optimize.fsolve to find roots of multivariable equations, such as def solve_me(x): # x is an array of the values you are solving for a,b = x integral_error = quad(density, a , b) - q prob_difference = density(b) - density(a) return np.array([integral_error, prob_difference]) fsolve(solve_me, [0.0, 1.0]) # initial guess is a = 0, b = 1 From erik.tollerud at gmail.com Fri May 2 17:36:34 2008 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Fri, 2 May 2008 14:36:34 -0700 Subject: [SciPy-user] scipy.stats rv objects from data In-Reply-To: <1209373351.6142.40.camel@mik> References: <1209373351.6142.40.camel@mik> Message-ID: On Mon, Apr 28, 2008 at 12:29 AM, St?fan van der Walt wrote: > That should be 1.0/len(x), otherwise all the probabilities are 0. > > Cheers > St?fan I had >>>from __future__ import division above when I actually tested this, so they aren't zero (although you're right they would be otherwise), but you're right that they would be if this was run as-is. I figured the pmf part out, though based on Michael's examples - the pmf SHOULD be zero everywhere other then exactly at one of the values... 
when I generate the cdf, it has non-zero values. On Mon, Apr 28, 2008 at 2:02 AM, Michael wrote: > There are at least 2 ways of using rv_discrete > > e.g. 2 ways to calculate the next element of a simple Markov Chain with > x(n+1)=Norm(0.5 x(n),1) > > from scipy.stats import rv_discrete > from numpy.random import multinomial > > x = 3 > n1 = stats.rv_continuous.rvs( stats.norm, 0.5*x, 1.0 )[0] > print n1 > n2 = stats.rv_discrete.rvs( stats.rv_discrete( name='sample', > values=([0,1,2],[3/10.,5/10.,2/10.])), 0.5*x, 1.0 )[0] > print n2 > print > sample = stats.rv_discrete( name='sample', > values=([0,1,2],[3/10.,5/10.,2/10.]) ).rvs( size=10 ) > print sample > > The multinomial distribution from numpy.random is somewhat faster (40 > times or so) but has a different idiom: > > SIZE = 100000 > VALUES = [0,1,2,3,4,5,6,7] > PROBS = [1/8.,1/8.,1/8.,1/8.,1/8.,1/8.,1/8.,1/8.] > > The idiom for rv_discrete is > rv_discrete( name='sample', values=(VALUES,PROBS) ) > > The idiom for numpy.multinomial is different; if memory serves, you get > frequencies as output instead of the actual values > multinomial( SIZE, PROBS ) > > >>> from numpy.random import multinomial > >>> multinomial(100,[ 0.2, 0.4, 0.1, 0.3 ]) > array([12, 44, 10, 34]) > >>> multinomial( 100, [0.2, 0.0, 0.8, 0.0] ) <-- don't do this > ... > >>> multinomial( 100, [0.2, 1e-16, 0.8, 1e-16] ) <-- or this > >>> multinomial( 100, [0.2-1e-16, 1e-16, 0.8-1e-16, 1e-16] ) <-- ok > array([21, 0, 79, 0]) > > the last one is ok since the probability adds up to 1... painful, but it > works As explained above, I think I got the discrete to work (spurred on by your simpler example) - but what's the utility of using the multinomial idiom over the rv_discrete syntax? Is it faster?. > Continuous v's discrete: i found this in ./stats/scstats.py > > from scipy import stats, r_ > from pylab import show, plot > import copy > > # SOURCE: ./stats/scstats.py > > SPREAD = 10 > > class cdf( object ): > """ Baseclass for the task of determining a sequence of numbers {vi} > which is distributed as a random variable X > """ > def integerDensityFunction( self ): > """ > Outputs an integer density function: xs (ints) and ys (probabilities) > which are the correspondence between the whole numbers on the x axis > to the probabilities on the y axis, according to a normal distribution. > """ > opt = [] > for i in r_[-SPREAD:SPREAD:100j]: # 2-tailed test (?) > opt.append(( i, stats.norm.cdf(i) )) # ( int, P(int) ) > return zip(*opt) # [ (int...), (P...) ] > > def display( self ): > xs, ys = self.integerDensityFunction() > plot( xs, ys ) > show() > > if __name__=='__main__': > d = cdf() > d.display() I don't really understand what you're suggesting, but anyway, I don't see a stats/scstats.py file in the scipy directory (at least in 0.6)... > Continuous: i can only suggest using rv_continuous > > stats.rv_continuous.rvs( stats.norm, 0.5*x, 1.0 ).whatever > > .rvs( shape, loc, scale ) is the random variates > .pdf( x, shape, loc, scale ) is the probability density function which, > i think, is or should be genuinely continuous My interpretation of this is that it is using the normal distribution - I want a distribution that is a smoothed/interpolated version of the discrete distribution I generated above. I take this to mean there's no built-in utility to do this, so I just have to make my own - this seems like a useful thing for data analysis, though, so I may submit it later to be added to SVN. 
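Roughly, what I have in mind is interpolating the empirical cdf with a spline and differentiating it to get a smooth pdf, something along these lines (the sample values are just placeholders, and nothing here forces the resulting pdf to stay non-negative):

import numpy as np
from scipy import interpolate

x = np.array([1.1, 1.9, 2.5, 2.6, 3.2, 4.0, 4.1, 5.3])   # placeholder samples

# empirical cdf: a jump of 1/len(x) at each sorted sample
xs = np.sort(x)
cdf_vals = (np.arange(len(xs)) + 1.0) / len(xs)

# smooth the cdf with a spline, then take its derivative as a pdf
tck = interpolate.splrep(xs, cdf_vals, k=3, s=0.05)
grid = np.linspace(xs[0], xs[-1], 200)
smooth_cdf = interpolate.splev(grid, tck)
smooth_pdf = interpolate.splev(grid, tck, der=1)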
From peridot.faceted at gmail.com Fri May 2 18:37:50 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 2 May 2008 18:37:50 -0400 Subject: [SciPy-user] scipy.stats rv objects from data In-Reply-To: References: <1209373351.6142.40.camel@mik> Message-ID: 2008/5/2 Erik Tollerud : > My interpretation of this is that it is using the normal distribution > - I want a distribution that is a smoothed/interpolated version of the > discrete distribution I generated above. I take this to mean there's > no built-in utility to do this, so I just have to make my own - this > seems like a useful thing for data analysis, though, so I may submit > it later to be added to SVN. Well, this is tricky. You need to decide what it is you want from your distribution: the *way* in which you smooth your distribution function can make a tremendous difference to the results you get. If you're doing statistics, coping with this can be a real challenge. There is one family of techniques, called kernel density estimators, for "smoothing" distributions in a controlled and statistically well-understood fashion. They are implemented in scipy as well. They are used specifically for reconstructing a distribution when you have a collection of samples drawn from it; the resulting pdf is a sum of Gaussians, one centered at each sample. The width is automatically chosen based on the samples. You can also, of course, consruct an arbitrary pdf as a function, and then use numerical integration on it. (If you use splines it can be integrated even more efficiently, though I don't know that you can ensure that they are everywhere positive; I'd be inclined to interpolate the log instead, but then you lose easy integration.) I don't know whether scipy.stats allows you to construct a distribution object, with all its standard methods, from a given pdf, but this would be a vary useful feature. Anne From robince at gmail.com Sat May 3 07:07:19 2008 From: robince at gmail.com (Robin) Date: Sat, 3 May 2008 12:07:19 +0100 Subject: [SciPy-user] combine two sparse matrices Message-ID: Hi, I was wondering what the most (memory) efficient way of combining two sparse matrices would be. I am constructing a very large sparse matrix, but due to the temporary memory required to calculate the entries I am doing it in blocks, with the computation of each block done in a forked child process. This returns a sparse matrix of the same dimensions as the full one, but with a smaller number of entries. I would like to add the entries from the block result to the 'master' copy. I can be sure that there will be no overlap in the position of entries (ie no matrix position will be in both sides). What is the most memory efficient way of combining these? I noticed += isn't implemented, but it's not clear how that would work anyway. The best I have done so far is adding two lil_matices (the block is created as an lil-matrix for fancy indexing) A = A + Apartial, but as the master copy grows this means I think that I will need double the final memory requirement for A (to add the last block). Is there a better way of doing this? Also, what are the memory requirements for the conversions (.tocsc, .tocsr etc.)? Will that mean I need double the memory anyway? 
Thanks, Robin From robince at gmail.com Sat May 3 11:16:10 2008 From: robince at gmail.com (Robin) Date: Sat, 3 May 2008 16:16:10 +0100 Subject: [SciPy-user] combine two sparse matrices In-Reply-To: References: Message-ID: On Sat, May 3, 2008 at 12:07 PM, Robin wrote: > Hi, > > I was wondering what the most (memory) efficient way of combining two > sparse matrices would be. > > I am constructing a very large sparse matrix, but due to the temporary > memory required to calculate the entries I am doing it in blocks, with > the computation of each block done in a forked child process. This > returns a sparse matrix of the same dimensions as the full one, but > with a smaller number of entries. I would like to add the entries from > the block result to the 'master' copy. I can be sure that there will > be no overlap in the position of entries (ie no matrix position will > be in both sides). > > What is the most memory efficient way of combining these? I noticed += > isn't implemented, but it's not clear how that would work anyway. The > best I have done so far is adding two lil_matices (the block is > created as an lil-matrix for fancy indexing) A = A + Apartial, but as > the master copy grows this means I think that I will need double the > final memory requirement for A (to add the last block). Is there a > better way of doing this? > > Also, what are the memory requirements for the conversions (.tocsc, > .tocsr etc.)? Will that mean I need double the memory anyway? After reading some more about the different sparse matrix types I am now trying having both master and block matrices as dok, passing back dict(Apartial) since cPickle can't pickle dok matrices, and then doing A.update(Apartial). This seems to be a bit slower (which is OK) but also the memory used seems to be growing a little fast - it seems as though something is not being released (I do del Apartial after the update but I have a feeling it might be sticking around) and I'm worried the machine will fall over again before it's finished. I'd still appreciate some advice as to the best way to do this. I thought of directly appending to the lists in the lil_matrix, but I would then have to sort them again and I wasn't sure if the object array could take resizing like this. Robin From wnbell at gmail.com Sat May 3 11:24:12 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 3 May 2008 10:24:12 -0500 Subject: [SciPy-user] combine two sparse matrices In-Reply-To: References: Message-ID: On Sat, May 3, 2008 at 10:16 AM, Robin wrote: > > > > I was wondering what the most (memory) efficient way of combining two > > sparse matrices would be. > > > > I'd still appreciate some advice as to the best way to do this. I > thought of directly appending to the lists in the lil_matrix, but I > would then have to sort them again and I wasn't sure if the object > array could take resizing like this. If you're worried about speed or memory consumption, then you should avoid lil_matrix and dok_matrix. The fastest and most memory efficient approach uses coo_matrix: row = array([2,3,1,4]) col = array([4,2,3,1]) data = array([5,5,5,5]) A = coo_matrix( (data,(row,col)), shape=(5,5)).tocsr() The conversion to CSR forces a copy, however CSR and COO are so much more memory efficient than LIL/DOK that it shouldn't matter. 
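In your block-by-block setting that might look roughly like this (compute_block and the sizes are placeholders standing in for whatever your forked children actually compute):

import numpy as np
from scipy.sparse import coo_matrix

n_blocks, nrows, ncols = 4, 4, 4                # placeholder sizes

def compute_block(k):
    # stand-in for the per-block computation done in a forked child;
    # it returns only the (row, col, data) triplets belonging to block k
    r = np.array([k, k])
    c = np.array([k, (k + 1) % ncols])
    v = np.array([2.0, 3.0])
    return r, c, v

rows, cols, vals = [], [], []
for k in range(n_blocks):
    r, c, v = compute_block(k)
    rows.append(r)
    cols.append(c)
    vals.append(v)

# one COO construction at the end, then a single conversion to CSR;
# positions are assumed not to overlap across blocks (if they did, the
# duplicates would be summed together during the conversion)
A = coo_matrix((np.concatenate(vals),
                (np.concatenate(rows), np.concatenate(cols))),
               shape=(nrows, ncols)).tocsr()

The per-block triplets are plain numpy arrays, so they are also cheap to pass back from the child process.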
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robince at gmail.com Sat May 3 11:47:37 2008 From: robince at gmail.com (Robin) Date: Sat, 3 May 2008 16:47:37 +0100 Subject: [SciPy-user] combine two sparse matrices In-Reply-To: References: Message-ID: On Sat, May 3, 2008 at 4:24 PM, Nathan Bell wrote: > > If you're worried about speed or memory consumption, then you should > avoid lil_matrix and dok_matrix. > > The fastest and most memory efficient approach uses coo_matrix: > > row = array([2,3,1,4]) > col = array([4,2,3,1]) > data = array([5,5,5,5]) > A = coo_matrix( (data,(row,col)), shape=(5,5)).tocsr() > > The conversion to CSR forces a copy, however CSR and COO are so much > more memory efficient than LIL/DOK that it shouldn't matter. Thanks - I was just thinking about COO matrices. The problem is I have to build it up in parts - I know lists are more efficient to extend than arrays so I could store the data, row, col as lists and extend with each chunk - then I think I can construct the COO matrix from the list. I was worried that the lists would stay around though - even if deleted so then I wouldn't have enough memory to do the COO-CSC conversion. Perhaps I could add another layer of indirection with another fork() though to make sure the lists are cleaned up and just pass back the COO matrix - hopefully the COO matrix will pickle OK. Thanks, Robin From dwf at cs.toronto.edu Sat May 3 15:11:47 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 3 May 2008 15:11:47 -0400 Subject: [SciPy-user] combine two sparse matrices In-Reply-To: References: Message-ID: <1F1617A6-53A3-4610-BD83-9452AC915A9F@cs.toronto.edu> On 3-May-08, at 11:47 AM, Robin wrote: > Perhaps I could add another layer of indirection with another fork() > though to make sure the lists are cleaned up and just pass back the > COO matrix - hopefully the COO matrix will pickle OK. You probably don't want to use pickle, both save and load will be pretty slow. I would call .tofile() on each of yourmatrix.row, yourmatrix.col and yourmatrix.data. Cheers, David From peridot.faceted at gmail.com Sat May 3 17:21:34 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 3 May 2008 17:21:34 -0400 Subject: [SciPy-user] combine two sparse matrices In-Reply-To: References: Message-ID: 2008/5/3 Robin : > Hi, > > I was wondering what the most (memory) efficient way of combining two > sparse matrices would be. > > I am constructing a very large sparse matrix, but due to the temporary > memory required to calculate the entries I am doing it in blocks, with > the computation of each block done in a forked child process. This > returns a sparse matrix of the same dimensions as the full one, but > with a smaller number of entries. I would like to add the entries from > the block result to the 'master' copy. I can be sure that there will > be no overlap in the position of entries (ie no matrix position will > be in both sides). > > What is the most memory efficient way of combining these? I noticed += > isn't implemented, but it's not clear how that would work anyway. The > best I have done so far is adding two lil_matices (the block is > created as an lil-matrix for fancy indexing) A = A + Apartial, but as > the master copy grows this means I think that I will need double the > final memory requirement for A (to add the last block). Is there a > better way of doing this? > > Also, what are the memory requirements for the conversions (.tocsc, > .tocsr etc.)? 
Will that mean I need double the memory anyway? Sparse matrices, in any of the formats implemented in numpy/scipy, are quite inefficient when dealing with subblocks. Every single entry in the block must have at the least a four-byte index stored alongside it. This need not necessarily be a problem, but it seems to be for you. Perhaps you would be better off working with the blocks, represented as dense arrays, directly? Or if the direct approach is too cumbersome, you could write a quick(*) subclass that lets you index the block as if it were part of a larger matrix. (*) Actually I'm not sure just how quick it would be, but overriding __getitem__ and __setitem__ plus the usual initialization stuff ought to do the job. Anne From wnbell at gmail.com Sat May 3 19:14:01 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 3 May 2008 18:14:01 -0500 Subject: [SciPy-user] combine two sparse matrices In-Reply-To: References: Message-ID: On Sat, May 3, 2008 at 4:21 PM, Anne Archibald wrote: > > Sparse matrices, in any of the formats implemented in numpy/scipy, are > quite inefficient when dealing with subblocks. Every single entry in > the block must have at the least a four-byte index stored alongside > it. This need not necessarily be a problem, but it seems to be for > you. FWIW there's now a BSR (block CSR) format in scipy.sparse for matrices with dense sub-blocks. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From cohen at slac.stanford.edu Mon May 5 09:37:51 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 05 May 2008 15:37:51 +0200 Subject: [SciPy-user] computing Bayesian credible intervals : help on constrained optimisation schemes? In-Reply-To: References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> Message-ID: <481F0DAF.9090500@slac.stanford.edu> Hello, I am attaching my script. I followed Neil's suggestion but it fails to converge, due seemingly to issues with the fact that a and b must be positive, which I cannot enforce with fsolve, AFAIK. I am ready to dive into constrained schemes, especially in openOpt, but I was hoping for some suggestions beforehand as to which path to follow to solve this problem now. I remind that I am trying to find a and b so that : integral from a to b of p(x) = Q and p(a)=p(b) where Q is given (0.95 in my script) and p is a Poisson posterior pdf for ON/OFF source experiments. a,b, and x are source rates, and as such are positive. People will have recognized the computation of a Bayesian credible interval here!! thanks a lot in advance, Johann Neil Martinsen-Burrell wrote: > Johann Cohen-Tanugi slac.stanford.edu> writes: > > >> hi Neil, thanks for your answer and sorry I was not clear enough. Of >> course I require the 2 conditions. 1) defines *a* credible interval if p >> is a posterior pdf; and 2) sets a constraint that for common situation >> yield *the* standard Bayesian credible interval. I will have a look at >> brentq, I do not know what it refers to. >> > > scipy.optimize.brentq is Brent's method for finding a root of a given scalar > equation. Since you are looking for two values, a and b, with two conditions, > then Brent's method is not appropriate (barring some symmetry-based reduction to > one variable). 
I like to use scipy.optimize.fsolve to find roots of > multivariable equations, such as > > def solve_me(x): # x is an array of the values you are solving for > a,b = x > integral_error = quad(density, a , b) - q > prob_difference = density(b) - density(a) > return np.array([integral_error, prob_difference]) > > fsolve(solve_me, [0.0, 1.0]) # initial guess is a = 0, b = 1 > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- A non-text attachment was scrubbed... Name: testCredibleIntervals.py Type: text/x-python Size: 1265 bytes Desc: not available URL: From Dharhas.Pothina at twdb.state.tx.us Mon May 5 09:46:33 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 05 May 2008 08:46:33 -0500 Subject: [SciPy-user] sorting timeseries data. In-Reply-To: <200805021234.13055.pgmdevlist@gmail.com> References: <481AF6C80200009B000121D3@GWWEB.twdb.state.tx.us> <200805021234.13055.pgmdevlist@gmail.com> Message-ID: <481EC969.63BA.009B.0@twdb.state.tx.us> Thanks Pierre, I'll have a look at the scikits.timeseries package when I have some time. Is it part of scipy/numpy or do I have to download it separately? Another question with the duplicates. I have a dataset with multple datapoints on each day is there a simple way to take the maximum (or minimum,or mean) for each day and assign it to that day ie if my data looks like 1986 10 01 16.3 1986 10 01 22.9 1986 10 01 13.2 1986 10 02 24.3 1986 10 02 22.1 1986 10 03 19.8 1986 10 03 20.1 1986 10 03 23.4 ... take the max of each day to get : 1986 10 01 22.9 1986 10 02 24.3 1986 10 03 23.4 ... thanks - dharhas >>> Pierre GM 5/2/2008 11:34 AM >>> On Friday 02 May 2008 12:11:04 Dharhas Pothina wrote: > I want to sort the data to be monotonically increasing by the variable > seconds and filter out duplicate values (say by deleting the second > occurrence). Dharhas, >>>idx = seconds.argsort() >>>sorted_seconds = seconds[idx] >>>sorted_data = data[idx] will do the trick. Look at the help for the argsort method if you need to use a specific sorting algorithm. 'mergesort' is stable and can be preferred. Then, you can try to find the duplicates that way: >>>diffs = numpy.ediff1d(sorted_seconds, to begin=1) >>>unq = (diffs!=0) >>>final_seconds = sorted_seconds.compress(unq) >>>final_data = sorted_data.compress(unq) In a side note, you may want to give scikits.timeseries a try: we develop this package specifically to handle time series (ie, series indexed in time). The sorting part would be automatic, and finding the duplicates is also quite easy. HIH _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From dmitrey.kroshko at scipy.org Mon May 5 10:33:31 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 05 May 2008 17:33:31 +0300 Subject: [SciPy-user] computing Bayesian credible intervals : help on constrained optimisation schemes? In-Reply-To: <481F0DAF.9090500@slac.stanford.edu> References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481F0DAF.9090500@slac.stanford.edu> Message-ID: <481F1ABB.8090303@scipy.org> I have tried to solve your problem via openopt.NSLP solver nssolve & openopt.GLP solver galileo, both give maxResidual=0.885, that is far from desired zero, so your system (with required non-negative solution) seems to have no solution. 
As for using fsolve or other smooth-funcs intended tools, it (AFAIK) is senseless wrt non-smooth funcs, like your numerical integration yields. Regards, D. Johann Cohen-Tanugi wrote: > Hello, > I am attaching my script. I followed Neil's suggestion but it fails to > converge, due seemingly to issues with the fact that a and b must be > positive, which I cannot enforce with fsolve, AFAIK. > I am ready to dive into constrained schemes, especially in openOpt, > but I was hoping for some suggestions beforehand as to which path to > follow to solve this problem now. > I remind that I am trying to find a and b so that : > integral from a to b of p(x) = Q > and p(a)=p(b) > where Q is given (0.95 in my script) and p is a Poisson posterior pdf > for ON/OFF source experiments. a,b, and x are source rates, and as > such are positive. > People will have recognized the computation of a Bayesian credible > interval here!! > > thanks a lot in advance, > Johann > > Neil Martinsen-Burrell wrote: >> Johann Cohen-Tanugi slac.stanford.edu> writes: >> >> >>> hi Neil, thanks for your answer and sorry I was not clear enough. Of >>> course I require the 2 conditions. 1) defines *a* credible interval >>> if p is a posterior pdf; and 2) sets a constraint that for common >>> situation yield *the* standard Bayesian credible interval. I will >>> have a look at brentq, I do not know what it refers to. >>> >> >> scipy.optimize.brentq is Brent's method for finding a root of a given >> scalar >> equation. Since you are looking for two values, a and b, with two >> conditions, >> then Brent's method is not appropriate (barring some symmetry-based >> reduction to >> one variable). I like to use scipy.optimize.fsolve to find roots of >> multivariable equations, such as >> >> def solve_me(x): # x is an array of the values you are solving for >> a,b = x >> integral_error = quad(density, a , b) - q >> prob_difference = density(b) - density(a) >> return np.array([integral_error, prob_difference]) >> >> fsolve(solve_me, [0.0, 1.0]) # initial guess is a = 0, b = 1 >> >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From bsouthey at gmail.com Mon May 5 11:13:09 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 05 May 2008 10:13:09 -0500 Subject: [SciPy-user] computing Bayesian credible intervals : help on constrained optimisation schemes? In-Reply-To: <481F1ABB.8090303@scipy.org> References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481F0DAF.9090500@slac.stanford.edu> <481F1ABB.8090303@scipy.org> Message-ID: <481F2405.7050202@gmail.com> Hi, I think that you need an alternative approach here because: 1) The Poisson is a discrete distribution not continuous (so can not use the solvers). 2) The Poisson is also a skewed distribution so finding points such that p(a)=p(b) is impossible unless the Poisson parameter is sufficiently large. 3) Depending on the parameters, it is impossible to get exact probabilities like 5% or 95% from discrete distributions. These issues probably are reflected in the reported problems. 
Depending on your parameter and hence how accurate do you want for your interval, Normal/Gaussian can provide a suitable approximation. If you must stay with the Poisson, you need to solve it by brute force and allow for p(a) != p(b). Regards Bruce dmitrey wrote: > I have tried to solve your problem via openopt.NSLP solver nssolve & > openopt.GLP solver galileo, both give maxResidual=0.885, that is far > from desired zero, so your system (with required non-negative solution) > seems to have no solution. > > As for using fsolve or other smooth-funcs intended tools, it (AFAIK) is > senseless wrt non-smooth funcs, like your numerical integration yields. > > Regards, D. > > Johann Cohen-Tanugi wrote: > >> Hello, >> I am attaching my script. I followed Neil's suggestion but it fails to >> converge, due seemingly to issues with the fact that a and b must be >> positive, which I cannot enforce with fsolve, AFAIK. >> I am ready to dive into constrained schemes, especially in openOpt, >> but I was hoping for some suggestions beforehand as to which path to >> follow to solve this problem now. >> I remind that I am trying to find a and b so that : >> integral from a to b of p(x) = Q >> and p(a)=p(b) >> where Q is given (0.95 in my script) and p is a Poisson posterior pdf >> for ON/OFF source experiments. a,b, and x are source rates, and as >> such are positive. >> People will have recognized the computation of a Bayesian credible >> interval here!! >> >> thanks a lot in advance, >> Johann >> >> Neil Martinsen-Burrell wrote: >> >>> Johann Cohen-Tanugi slac.stanford.edu> writes: >>> >>> >>> >>>> hi Neil, thanks for your answer and sorry I was not clear enough. Of >>>> course I require the 2 conditions. 1) defines *a* credible interval >>>> if p is a posterior pdf; and 2) sets a constraint that for common >>>> situation yield *the* standard Bayesian credible interval. I will >>>> have a look at brentq, I do not know what it refers to. >>>> >>>> >>> scipy.optimize.brentq is Brent's method for finding a root of a given >>> scalar >>> equation. Since you are looking for two values, a and b, with two >>> conditions, >>> then Brent's method is not appropriate (barring some symmetry-based >>> reduction to >>> one variable). I like to use scipy.optimize.fsolve to find roots of >>> multivariable equations, such as >>> >>> def solve_me(x): # x is an array of the values you are solving for >>> a,b = x >>> integral_error = quad(density, a , b) - q >>> prob_difference = density(b) - density(a) >>> return np.array([integral_error, prob_difference]) >>> >>> fsolve(solve_me, [0.0, 1.0]) # initial guess is a = 0, b = 1 >>> >>> >>> >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From spmcinerney at hotmail.com Mon May 5 15:26:16 2008 From: spmcinerney at hotmail.com (Stephen McInerney) Date: Mon, 5 May 2008 12:26:16 -0700 Subject: [SciPy-user] Resume In-Reply-To: References: Message-ID: Dear SciPy, Not sure if this violates list etiquette, so I apologize in advance if it does. 
I'm an MSEE with 8 years Python experience across a variety of HW/SW roles. My full profile: www.linkedin.com/in/stephenmcinerney To contact me directly: spmcinerney at hotmail.com Regards, Stephen _________________________________________________________________ Stay in touch when you're away with Windows Live Messenger. http://www.windowslive.com/messenger/overview.html?ocid=TXT_TAGLM_WL_Refresh_messenger_052008 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eads at soe.ucsc.edu Mon May 5 18:27:31 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 05 May 2008 15:27:31 -0700 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: <481B3271.1000006@slac.stanford.edu> References: <481B3271.1000006@slac.stanford.edu> Message-ID: <481F89D3.10403@soe.ucsc.edu> Johann Cohen-Tanugi wrote: > Hello, > The question is a bit more general than that : How can I use SciPy to > compute the values a and b defined by : > 1) integral from a to b of a pdf p(x) (or any function for that matter) > = a given value q > 2) p(a)=p(b) > > thanks in advance, > Johann Computing the area of the posterior over a credible interval involves integration. If your prior and likelihood are conjugate, it might be easier to use a conjugate distribution parameterized with the posterior hyperparameters and then compute CDF(b)-CDF(a). See http://en.wikipedia.org/wiki/Conjugate_prior for a list of conjugate priors with the hyperparameters worked out. Now, I guess what your really asking is how to do the inverse of that (question #1), i.e. how do you find the end points of the interval if you know the area? Try the inverse CDF or inverse survival function. In Scipy, some distributions have an isf member function. b=post.isf(q) a=0.0 Now onto question #2, let's assume your posterior distribution is symmetric, you can try the inverse CDF or the inverse survival function. For example, if q=0.7 (70%) and the posterior is symmetric, then L=[post.isf(0.85), post.isf(0.15)] a=min(L) b=max(L) Note, post.pdf(a) should be equal to post.pdf(b) because post is symmetric. Cheers, Damian Eads From roger.herikstad at gmail.com Mon May 5 23:26:26 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Tue, 6 May 2008 11:26:26 +0800 Subject: [SciPy-user] common storage between matlab and python Message-ID: Hi list, Does anyone know of a matlab i/o interface beyond that of scipy.io.loadmat /savemat? I know these routines will handle scalars and vector, but what about matlab structures? My problem is that we have a substantial amount of code written in matlab that makes use of matlab's object oriented programming, storing the results of various calculations in objects. What I would like to do is to interface with these objects in python, that is read them from disk, do some calculations, and write them back in a format consistent with what the matlab object expects. So, for instance for the matlab object obj, the data is stored as obj.data.field1, obj.data.field2, etc. Is there a way for me to read the matlab file, do something to field1 and field2, in python, then store the modified fields back into the structure for matlab to read? Thanks! 
~ Roger From fullung at gmail.com Tue May 6 02:10:15 2008 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 6 May 2008 08:10:15 +0200 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: References: Message-ID: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> Hello, On Tue, May 6, 2008 at 5:26 AM, Roger Herikstad wrote: > Hi list, > Does anyone know of a matlab i/o interface beyond that of > scipy.io.loadmat /savemat? I know these routines will handle scalars > and vector, but what about matlab structures? My problem is that we > have a substantial amount of code written in matlab that makes use of > matlab's object oriented programming, storing the results of various > calculations in objects. What I would like to do is to interface with > these objects in python, that is read them from disk, do some > calculations, and write them back in a format consistent with what the > matlab object expects. So, for instance for the matlab object obj, the > data is stored as obj.data.field1, obj.data.field2, etc. Is there a > way for me to read the matlab file, do something to field1 and field2, > in python, then store the modified fields back into the structure for > matlab to read? Thanks! If you don't find a solution specific to MATLAB, I'd recommend using HDF5. Using HDF5 allows me to move my "objects" seamlessly between MATLAB, Python and Java. You'll have to write a bit of MATLAB code to take a struct and inspect it using setfield, getfield and fieldnames, and then use hdf5write to write it to disk. obj might be stored in the HDF5 file obj.h5, with a "data" group, and two datasets ("field1" and "field2"). Then you can load it up in PyTables, which should give you an interface very similar to what you have in MATLAB. I think obj.data.field1 might just work thanks to the magic that is PyTables (where field1 could be a returned as a NumPy array). Hope this helps. Cheers, Albert From cohen at slac.stanford.edu Tue May 6 02:24:52 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 06 May 2008 08:24:52 +0200 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: <481F89D3.10403@soe.ucsc.edu> References: <481B3271.1000006@slac.stanford.edu> <481F89D3.10403@soe.ucsc.edu> Message-ID: <481FF9B4.50600@slac.stanford.edu> Damian, Bruce, Dmitrey, thank you very much for your answers. I feel I need to be more precise as to what I am doing. Look at eqs 5.13 and 5.14 of the following article http://www.astro.cornell.edu/staff/loredo/bayes/promise.pdf These 2 eqs define the posterior probability I am trying to work with. Bruce, it is a continuous function of the count rate s, though of course the underlying Poisson process is discrete. Damian, I will definitely look into your suggestions to understand better what is available in that respect, but I am not expecting this pdf to be implemented for cdf or isf computing. Dmitrey could you send me your calls to the openopt optimizers, that would be helpful for me to get a headstart on using your very nice framework and play with other possibilities then fsolve. As eqs 5.13 and 5.14 show, the resulting pdf is by construction normalized to 1. In my code, I tried integrate.quad and did not get a result close to 1. So I think that I have a numerical issue in my python formula, which would presumably also explain why it fails to converge (the pdf seems to go down to 0 at low s value much too quickly). 
thanks again, Johann Damian Eads wrote: > Johann Cohen-Tanugi wrote: > >> Hello, >> The question is a bit more general than that : How can I use SciPy to >> compute the values a and b defined by : >> 1) integral from a to b of a pdf p(x) (or any function for that matter) >> = a given value q >> 2) p(a)=p(b) >> >> thanks in advance, >> Johann >> > > Computing the area of the posterior over a credible interval involves > integration. If your prior and likelihood are conjugate, it might be > easier to use a conjugate distribution parameterized with the posterior > hyperparameters and then compute CDF(b)-CDF(a). See > http://en.wikipedia.org/wiki/Conjugate_prior for a list of conjugate > priors with the hyperparameters worked out. > > Now, I guess what your really asking is how to do the inverse of that > (question #1), i.e. how do you find the end points of the interval if > you know the area? Try the inverse CDF or inverse survival function. In > Scipy, some distributions have an isf member function. > b=post.isf(q) > a=0.0 > > Now onto question #2, let's assume your posterior distribution is > symmetric, you can try the inverse CDF or the inverse survival function. > For example, if q=0.7 (70%) and the posterior is symmetric, then > L=[post.isf(0.85), post.isf(0.15)] > a=min(L) > b=max(L) > Note, post.pdf(a) should be equal to post.pdf(b) because post is symmetric. > > Cheers, > > Damian Eads > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cohen at slac.stanford.edu Sun May 4 15:41:38 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 04 May 2008 21:41:38 +0200 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> Message-ID: <481E1172.9070104@slac.stanford.edu> hi Neil, thanks a lot. I needed to modify : integral_error = quad(density, a , b) - q into integral_error = quad(density, a , b)[0] - q as quad returns a tuple with the result and the estimated error More interesting, as a test I used : q=0.96 def density(s): if s<0.5: return 4*s else : return 4-4*s For which the solution is a=0.1, and b=0.9 If I keep results=optimize.fsolve(solve_me, [0, 1]) fsolve fails because it wanders in directions a<0 and b>1. With results=optimize.fsolve(solve_me, [0.5, 0.5]) fsolve converges without trouble.... Is there a simple way to get it to work with constraints like a>0 and b<1? I am afraid that the answer is no, if I understood well the context of another recent thread entitled "constrained optimization". Thanks a lot, Johann Neil Martinsen-Burrell wrote: > Johann Cohen-Tanugi slac.stanford.edu> writes: > > >> hi Neil, thanks for your answer and sorry I was not clear enough. Of >> course I require the 2 conditions. 1) defines *a* credible interval if p >> is a posterior pdf; and 2) sets a constraint that for common situation >> yield *the* standard Bayesian credible interval. I will have a look at >> brentq, I do not know what it refers to. >> > > scipy.optimize.brentq is Brent's method for finding a root of a given scalar > equation. Since you are looking for two values, a and b, with two conditions, > then Brent's method is not appropriate (barring some symmetry-based reduction to > one variable). 
I like to use scipy.optimize.fsolve to find roots of > multivariable equations, such as > > def solve_me(x): # x is an array of the values you are solving for > a,b = x > integral_error = quad(density, a , b) - q > prob_difference = density(b) - density(a) > return np.array([integral_error, prob_difference]) > > fsolve(solve_me, [0.0, 1.0]) # initial guess is a = 0, b = 1 > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Tue May 6 03:18:41 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 6 May 2008 03:18:41 -0400 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: <481E1172.9070104@slac.stanford.edu> References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> Message-ID: 2008/5/4 Johann Cohen-Tanugi : > integral_error = quad(density, a , b) - q > > into > > integral_error = quad(density, a , b)[0] - q > > as quad returns a tuple with the result and the estimated error This bites me every time I use quad. Still, it's valuable to check the reported error, though it's not always clear what to do with the results. > More interesting, as a test I used : > q=0.96 > > def density(s): > if s<0.5: > return 4*s > else : > return 4-4*s > > For which the solution is a=0.1, and b=0.9 > If I keep > results=optimize.fsolve(solve_me, [0, 1]) > fsolve fails because it wanders in directions a<0 and b>1. > With > results=optimize.fsolve(solve_me, [0.5, 0.5]) > fsolve converges without trouble.... Is there a simple way to get it to > work with constraints like a>0 and b<1? I am afraid that the answer is > no, if I understood well the context of another recent thread entitled > "constrained optimization". Multidimensional solving, as you are discovering, is a pain. If at all possible it's better to turn it into one-dimensional optimization. It can also be worth disposing of constraints - at least "open" constraints - using funny reparameterizations (log, for example, is great for ensuring that positive things stay positive) Do you really need p(a)=p(b)? I mean, is this the right supplementary condition to construct a credible interval? Would it be acceptable to choose instead p(xb)? This will probably be easier to work with, at least if you can get good numerical behaviour out of your PDF. Have you looked into writing down an analytical CDF? It looks moderately ghastly - all those s**n exp(k*s) - but possibly something that could be done with a bit of sweat. This would improve the performance of whatever solution method you use - the roundoff errors from quad() follow some irregular, discontinuous pattern. If you must use numerical integration, look into using a non-adaptive Gaussian quadrature, which won't have that particular problem. (There's one in scipy for more or less this reason.) Analytical mean, mode, or median could probably be made use of too. Or a different expansion for small s. A session with SAGE/MAPLE/Mathematica might yield something. If you do need p(a)=p(b), perhaps the way to turn the problem one-dimensional and reduce your headaches is to use a as the sole parameter and write b as a function of a using the fact that your distribution is single-peaked (so that p(x)-p(a)==0 has just two solutions, one of which you know about already; if you feel like being devious you could do root-finding on (p(x)-p(a))/(x-a) ). 
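Schematically, something like the sketch below is what I mean (untested; p, xm, lo, hi and q are placeholders for your pdf, its mode, the ends of its support and the credibility level, which you will have to supply):

from scipy.optimize import brentq
from scipy.integrate import quad

def b_of_a(a):
    # the point on the far side of the mode at the same height as a
    return brentq(lambda x: p(x) - p(a), xm, hi)

def area_mismatch(a):
    b = b_of_a(a)
    return quad(p, a, b)[0] - q

a = brentq(area_mismatch, lo, xm)   # one-dimensional and bracketed from the start
b = b_of_a(a)

That way the only root-finding that can wander into forbidden territory is one-dimensional, and the bracket keeps it where you want it.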
Good luck, Anne From cohen at slac.stanford.edu Tue May 6 03:45:58 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 06 May 2008 09:45:58 +0200 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> Message-ID: <48200CB6.2000205@slac.stanford.edu> hi Anne, thanks! I will need some time to digest your suggestions and try to implement them. I think that the condition p(a)=p(b) where p is the actual pdf and not an integral of it, is the sufficient condition for unimodal distributions so that the credible interval is uniquely defined as the interval encompassing the mode. best, Johann Anne Archibald wrote: > 2008/5/4 Johann Cohen-Tanugi : > > >> integral_error = quad(density, a , b) - q >> >> into >> >> integral_error = quad(density, a , b)[0] - q >> >> as quad returns a tuple with the result and the estimated error >> > > This bites me every time I use quad. Still, it's valuable to check the > reported error, though it's not always clear what to do with the > results. > > >> More interesting, as a test I used : >> q=0.96 >> >> def density(s): >> if s<0.5: >> return 4*s >> else : >> return 4-4*s >> >> For which the solution is a=0.1, and b=0.9 >> If I keep >> results=optimize.fsolve(solve_me, [0, 1]) >> fsolve fails because it wanders in directions a<0 and b>1. >> With >> results=optimize.fsolve(solve_me, [0.5, 0.5]) >> fsolve converges without trouble.... Is there a simple way to get it to >> work with constraints like a>0 and b<1? I am afraid that the answer is >> no, if I understood well the context of another recent thread entitled >> "constrained optimization". >> > > Multidimensional solving, as you are discovering, is a pain. If at all > possible it's better to turn it into one-dimensional optimization. It > can also be worth disposing of constraints - at least "open" > constraints - using funny reparameterizations (log, for example, is > great for ensuring that positive things stay positive) > > Do you really need p(a)=p(b)? I mean, is this the right supplementary > condition to construct a credible interval? Would it be acceptable to > choose instead p(xb)? This will probably be easier to work > with, at least if you can get good numerical behaviour out of your > PDF. > > Have you looked into writing down an analytical CDF? It looks > moderately ghastly - all those s**n exp(k*s) - but possibly something > that could be done with a bit of sweat. This would improve the > performance of whatever solution method you use - the roundoff errors > from quad() follow some irregular, discontinuous pattern. If you must > use numerical integration, look into using a non-adaptive Gaussian > quadrature, which won't have that particular problem. (There's one in > scipy for more or less this reason.) Analytical mean, mode, or median > could probably be made use of too. Or a different expansion for small > s. A session with SAGE/MAPLE/Mathematica might yield something. > > If you do need p(a)=p(b), perhaps the way to turn the problem > one-dimensional and reduce your headaches is to use a as the sole > parameter and write b as a function of a using the fact that your > distribution is single-peaked (so that p(x)-p(a)==0 has just two > solutions, one of which you know about already; if you feel like being > devious you could do root-finding on (p(x)-p(a))/(x-a) ). 
> > Good luck, > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From roger.herikstad at gmail.com Tue May 6 04:42:06 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Tue, 6 May 2008 16:42:06 +0800 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> Message-ID: Hi, Thanks alot! I was looking at hdf5 as an alternative, and by your description I think it might suit my needs. I've been considering using PyTables for a while, but never had the initiative to do so, but I guess this is it... My one concern is to make this as invisible to pure matlab users as possible. For the time being, we are using both languages, and I was hoping there was a way for both python and matlab to coexist. I'll look into it.. Thanks again! ~ Roger On Tue, May 6, 2008 at 2:10 PM, Albert Strasheim wrote: > Hello, > > > > On Tue, May 6, 2008 at 5:26 AM, Roger Herikstad > wrote: > > Hi list, > > Does anyone know of a matlab i/o interface beyond that of > > scipy.io.loadmat /savemat? I know these routines will handle scalars > > and vector, but what about matlab structures? My problem is that we > > have a substantial amount of code written in matlab that makes use of > > matlab's object oriented programming, storing the results of various > > calculations in objects. What I would like to do is to interface with > > these objects in python, that is read them from disk, do some > > calculations, and write them back in a format consistent with what the > > matlab object expects. So, for instance for the matlab object obj, the > > data is stored as obj.data.field1, obj.data.field2, etc. Is there a > > way for me to read the matlab file, do something to field1 and field2, > > in python, then store the modified fields back into the structure for > > matlab to read? Thanks! > > If you don't find a solution specific to MATLAB, I'd recommend using HDF5. > > Using HDF5 allows me to move my "objects" seamlessly between MATLAB, > Python and Java. > > You'll have to write a bit of MATLAB code to take a struct and inspect > it using setfield, getfield and fieldnames, and then use hdf5write to > write it to disk. > > obj might be stored in the HDF5 file obj.h5, with a "data" group, and > two datasets ("field1" and "field2"). > > Then you can load it up in PyTables, which should give you an > interface very similar to what you have in MATLAB. I think > obj.data.field1 might just work thanks to the magic that is PyTables > (where field1 could be a returned as a NumPy array). > > Hope this helps. > > Cheers, > > Albert > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Tue May 6 05:09:44 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 6 May 2008 05:09:44 -0400 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: <48200CB6.2000205@slac.stanford.edu> References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> <48200CB6.2000205@slac.stanford.edu> Message-ID: 2008/5/6 Johann Cohen-Tanugi : > hi Anne, thanks! I will need some time to digest your suggestions and > try to implement them. 
I think that the condition p(a)=p(b) where p is > the actual pdf and not an integral of it, is the sufficient condition > for unimodal distributions so that the credible interval is uniquely > defined as the interval encompassing the mode. Yes, this should work; I would have gone with an interval that represented (say) 5th, 50th, and 95th percentiles, bracketing the median instead of the mode, but that might be peculiar if the distribution has large tails or you want to compare with a maximum-likelihood method (which returns the mode, I guess). My actual experience with Bayesian statistics is pretty limited; it's probably best to go with whatever's most standard (if the numerics can be made to behave). Anne From barrywark at gmail.com Tue May 6 12:46:28 2008 From: barrywark at gmail.com (Barry Wark) Date: Tue, 6 May 2008 09:46:28 -0700 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> Message-ID: Have you considered mlabwrap (a scikits project)? It allows bridged calls into the matlab runtime (bridging sclaars, strings, numpy arrays, etc.; I don't remember if it bridges structs). Although it's, in some ways, a half-way solution, it would allow the issue to remain more transparent to matlab users at the expense of a bit of overhead for the python users. On Tue, May 6, 2008 at 1:42 AM, Roger Herikstad wrote: > Hi, > Thanks alot! I was looking at hdf5 as an alternative, and by your > description I think it might suit my needs. I've been considering > using PyTables for a while, but never had the initiative to do so, but > I guess this is it... My one concern is to make this as invisible to > pure matlab users as possible. For the time being, we are using both > languages, and I was hoping there was a way for both python and matlab > to coexist. I'll look into it.. Thanks again! > > ~ Roger > > > > On Tue, May 6, 2008 at 2:10 PM, Albert Strasheim wrote: > > Hello, > > > > > > > > On Tue, May 6, 2008 at 5:26 AM, Roger Herikstad > > wrote: > > > Hi list, > > > Does anyone know of a matlab i/o interface beyond that of > > > scipy.io.loadmat /savemat? I know these routines will handle scalars > > > and vector, but what about matlab structures? My problem is that we > > > have a substantial amount of code written in matlab that makes use of > > > matlab's object oriented programming, storing the results of various > > > calculations in objects. What I would like to do is to interface with > > > these objects in python, that is read them from disk, do some > > > calculations, and write them back in a format consistent with what the > > > matlab object expects. So, for instance for the matlab object obj, the > > > data is stored as obj.data.field1, obj.data.field2, etc. Is there a > > > way for me to read the matlab file, do something to field1 and field2, > > > in python, then store the modified fields back into the structure for > > > matlab to read? Thanks! > > > > If you don't find a solution specific to MATLAB, I'd recommend using HDF5. > > > > Using HDF5 allows me to move my "objects" seamlessly between MATLAB, > > Python and Java. > > > > You'll have to write a bit of MATLAB code to take a struct and inspect > > it using setfield, getfield and fieldnames, and then use hdf5write to > > write it to disk. > > > > obj might be stored in the HDF5 file obj.h5, with a "data" group, and > > two datasets ("field1" and "field2"). 
> > > > Then you can load it up in PyTables, which should give you an > > interface very similar to what you have in MATLAB. I think > > obj.data.field1 might just work thanks to the magic that is PyTables > > (where field1 could be a returned as a NumPy array). > > > > Hope this helps. > > > > Cheers, > > > > Albert > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lists at benair.net Tue May 6 14:11:47 2008 From: lists at benair.net (Benedikt Koenig) Date: Tue, 06 May 2008 20:11:47 +0200 Subject: [SciPy-user] scipy.test() failure at 'generic filter 1' Message-ID: <1210097507.5857.23.camel@iagpc71.iag.uni-stuttgart.de> Dear all, scipy 0.6.0 built went without problems and scipy.test() fails with a seg fault. Numpy 1.0.4 installed and tested without problems. I am running centos4 with Python 2.3.4 on an i586. The last part of the output of python -c 'import numpy; import scipy; scipy.test(1, verbosity=2)' reads to: gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1Speicherschutzverletzung ('Speicherschutzverletzung' means seg fault) The same problem was reported in http://thread.gmane.org/gmane.comp.python.scientific.user/13774 and Stefan van der Walt answered that this bug has been fixed. Can anyone tell me where to find this bug fix, or how to otherwise get rid of this error? Actually, I am not sure whether I really care about the 'generic filter' stuff but there seems to be some other problem with 0.6.0. A script that ran fine using scipy 0.5.2 gets stuck under my new 0.6.0 installation. No error is given but the line >>> data_cfd = fromfile(cfdfile, dtype=float, sep=' ').reshape(-1,19), which took a few seconds to process, is now using 100% cpu for several minutes before I kill it. No error or hint whatsoever what is going. Any idea what's happening there? Thanks, bene From robert.kern at gmail.com Tue May 6 15:55:49 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 6 May 2008 14:55:49 -0500 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> Message-ID: <3d375d730805061255h767f071cj73a5949465bda5ab@mail.gmail.com> On Tue, May 6, 2008 at 2:18 AM, Anne Archibald wrote: > Do you really need p(a)=p(b)? I mean, is this the right supplementary > condition to construct a credible interval? Would it be acceptable to > choose instead p(xb)? This will probably be easier to work > with, at least if you can get good numerical behaviour out of your > PDF. It's one of the defining characteristics of the kind of credible interval Johann is looking for. 
"Bayesian credible interval" is a somewhat broad designation; it applies to pretty much any interval on a Baysian posterior distribution as long as the interval is selected according to some rule that statisticians agree has some kind of meaning. In practice, one of the most common such rules is to find the "Highest Posterior Density" (HPD) interval, where p(a)=p(b) and P(b)-P(a)=0.95 or some such chosen credibility level. Imagine the PDF being flooded with water up to its peak (let's assume unimodality for now). We gradually lower the level of the water such that for both points a and b where the water touches the PDF, p(a)=p(b). We lower the water until the integral under the "dry" peak is equal to 0.95. Then [a,b] is the HPD credible interval for that PDF at the 0.95 credibility level. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Tue May 6 15:57:20 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 6 May 2008 21:57:20 +0200 Subject: [SciPy-user] scipy.test() failure at 'generic filter 1' In-Reply-To: <1210097507.5857.23.camel@iagpc71.iag.uni-stuttgart.de> References: <1210097507.5857.23.camel@iagpc71.iag.uni-stuttgart.de> Message-ID: <9457e7c80805061257l164c177dre0b36459ceb31b38@mail.gmail.com> Hi Benedikt 2008/5/6 Benedikt Koenig : > scipy 0.6.0 built went without problems and scipy.test() fails with a > seg fault. Numpy 1.0.4 installed and tested without problems. I am > running centos4 with Python 2.3.4 on an i586. This problem had been fixed in SVN a while ago already. Regards St?fan From bsouthey at gmail.com Tue May 6 16:47:57 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 6 May 2008 15:47:57 -0500 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: <3d375d730805061255h767f071cj73a5949465bda5ab@mail.gmail.com> References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> <3d375d730805061255h767f071cj73a5949465bda5ab@mail.gmail.com> Message-ID: Hi, First what do you really mean by p(a) or p(b)? I would think that you want the definition that Anne defines them and as also implied by Robert's example. For this case the posterior is most likely gamma or proportional to one (it's been a while) which is asymmetric. (Note you will need to follow the derivation to determine if the constant has been ignored since it drops out of most calculations.) So defining the probabilities as p(xb) means that the points a and b are not generally going to be equidistant from the center of the distribution (mean, median and mode will give potentially different but correct answers) and p(xb). In this case you must be able to find the areas of the two tails. Obviously it is considerably easier and more flexible to use the actual distribution than a vague formula. Bruce On Tue, May 6, 2008 at 2:55 PM, Robert Kern wrote: > On Tue, May 6, 2008 at 2:18 AM, Anne Archibald > wrote: > > Do you really need p(a)=p(b)? I mean, is this the right supplementary > > condition to construct a credible interval? Would it be acceptable to > > choose instead p(xb)? This will probably be easier to work > > with, at least if you can get good numerical behaviour out of your > > PDF. > > It's one of the defining characteristics of the kind of credible > interval Johann is looking for. 
"Bayesian credible interval" is a > somewhat broad designation; it applies to pretty much any interval on > a Baysian posterior distribution as long as the interval is selected > according to some rule that statisticians agree has some kind of > meaning. > > In practice, one of the most common such rules is to find the "Highest > Posterior Density" (HPD) interval, where p(a)=p(b) and P(b)-P(a)=0.95 > or some such chosen credibility level. Imagine the PDF being flooded > with water up to its peak (let's assume unimodality for now). We > gradually lower the level of the water such that for both points a and > b where the water touches the PDF, p(a)=p(b). We lower the water until > the integral under the "dry" peak is equal to 0.95. Then [a,b] is the > HPD credible interval for that PDF at the 0.95 credibility level. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cohen at slac.stanford.edu Tue May 6 16:44:53 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 06 May 2008 22:44:53 +0200 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: <3d375d730805061255h767f071cj73a5949465bda5ab@mail.gmail.com> References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> <3d375d730805061255h767f071cj73a5949465bda5ab@mail.gmail.com> Message-ID: <4820C345.9090809@slac.stanford.edu> mamma mia, shame on me... :( Of course in my code, it should have read : value+=coeff*(s*Ton)**(i-1)/sp.factorial(i-1) and not value=coeff*(s*Ton)**(i-1)/sp.factorial(i-1) Convergence is much better now with the simple fsolve! so much for public humiliation. Thanks all for bearing with me :) I am still interested in testing constrained solver, Dmitrey. Johann Robert Kern wrote: > On Tue, May 6, 2008 at 2:18 AM, Anne Archibald > wrote: > >> Do you really need p(a)=p(b)? I mean, is this the right supplementary >> condition to construct a credible interval? Would it be acceptable to >> choose instead p(xb)? This will probably be easier to work >> with, at least if you can get good numerical behaviour out of your >> PDF. >> > > It's one of the defining characteristics of the kind of credible > interval Johann is looking for. "Bayesian credible interval" is a > somewhat broad designation; it applies to pretty much any interval on > a Baysian posterior distribution as long as the interval is selected > according to some rule that statisticians agree has some kind of > meaning. > > In practice, one of the most common such rules is to find the "Highest > Posterior Density" (HPD) interval, where p(a)=p(b) and P(b)-P(a)=0.95 > or some such chosen credibility level. Imagine the PDF being flooded > with water up to its peak (let's assume unimodality for now). We > gradually lower the level of the water such that for both points a and > b where the water touches the PDF, p(a)=p(b). We lower the water until > the integral under the "dry" peak is equal to 0.95. Then [a,b] is the > HPD credible interval for that PDF at the 0.95 credibility level. 
> > From robert.kern at gmail.com Tue May 6 17:11:44 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 6 May 2008 16:11:44 -0500 Subject: [SciPy-user] computing Bayesian credible intervals In-Reply-To: References: <481B3271.1000006@slac.stanford.edu> <481B58E4.1050603@slac.stanford.edu> <481E1172.9070104@slac.stanford.edu> <3d375d730805061255h767f071cj73a5949465bda5ab@mail.gmail.com> Message-ID: <3d375d730805061411y11efa8b9m354066903f15d419@mail.gmail.com> On Tue, May 6, 2008 at 3:47 PM, Bruce Southey wrote: > Hi, > First what do you really mean by p(a) or p(b)? The values of the PDF at points a and b. These are *not* probabilities, but point values of the density function. Generally speaking, with an HPD interval, the probabilities under the two tails (what you are referring to with "p(xb)") are not equal. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pete.forman at westerngeco.com Wed May 7 07:32:25 2008 From: pete.forman at westerngeco.com (Pete Forman) Date: Wed, 07 May 2008 12:32:25 +0100 Subject: [SciPy-user] common storage between matlab and python References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> Message-ID: <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> "Roger Herikstad" writes: > I was looking at hdf5 as an alternative, and by your description I > think it might suit my needs. MATLAB has read and write capabilities for HDF5 though I do not know how well your MATLAB classes are done. MATLAB's save function will take a -v7.3 option to use the HDF5-based version of the MATLAB MAT-file. I've not tested any of this, please let us know how you get on. Here is a relevant thread: http://www.mathworks.fr/matlabcentral/newsreader/view_thread/159346 -- Pete Forman -./\.- Disclaimer: This post is originated WesternGeco -./\.- by myself and does not represent pete.forman at westerngeco.com -./\.- the opinion of Schlumberger or http://petef.22web.net -./\.- WesternGeco. From roger.herikstad at gmail.com Wed May 7 10:47:17 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 7 May 2008 22:47:17 +0800 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> Message-ID: Hi, Thanks! I wasn't aware of this capability, but I think this solves my problem. All I need to do is add the -v7.3 option to the save function in my objects and all should be ok. Thanks again! ~ Roger On Wed, May 7, 2008 at 7:32 PM, Pete Forman wrote: > "Roger Herikstad" writes: > > > I was looking at hdf5 as an alternative, and by your description I > > think it might suit my needs. > > MATLAB has read and write capabilities for HDF5 though I do not know > how well your MATLAB classes are done. MATLAB's save function will > take a -v7.3 option to use the HDF5-based version of the MATLAB > MAT-file. I've not tested any of this, please let us know how you get > on. Here is a relevant thread: > > http://www.mathworks.fr/matlabcentral/newsreader/view_thread/159346 > -- > Pete Forman -./\.- Disclaimer: This post is originated > WesternGeco -./\.- by myself and does not represent > pete.forman at westerngeco.com -./\.- the opinion of Schlumberger or > http://petef.22web.net -./\.- WesternGeco. 
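On the python side I imagine the -v7.3 file can then be opened directly with PyTables, something along these lines (untested, and the node names below are only my guess at how matlab lays the file out):

import tables

fileh = tables.openFile('obj.mat', mode='r')   # a -v7.3 .mat file is HDF5 underneath
field1 = fileh.root.obj.data.field1.read()     # assuming obj.data.field1 maps onto this path
# ... do the python-side calculations on field1 ...
fileh.close()

If the layout turns out to be different it should be easy enough to spot with PyTables' tree inspection.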
> > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ndbecker2 at gmail.com Wed May 7 11:46:34 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 07 May 2008 11:46:34 -0400 Subject: [SciPy-user] common storage between matlab and python References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> Message-ID: I've been interested in looking at pytables, but the doc makes it look like there's a fair learning curve. Anyone have a _simple_ example, maybe a simple numpy vector, or a few vectors? From cimrman3 at ntc.zcu.cz Wed May 7 11:54:29 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 07 May 2008 17:54:29 +0200 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> Message-ID: <4821D0B5.6020005@ntc.zcu.cz> Neal Becker wrote: > I've been interested in looking at pytables, but the doc makes it look like > there's a fair learning curve. Anyone have a _simple_ example, maybe a > simple numpy vector, or a few vectors? I used to use the code below, not sure if it works with the current versions, but anyway, it can give you an idea how simple it is. best regards, r. def writeSparseMatrixHDF5( fileName, mtx, name = 'a sparse matrix' ): """Assume CSR/CSC.""" fd = pt.openFile( fileName, mode = "w", title = name ) try: info = fd.createGroup( '/', 'info' ) fd.createArray( info, 'dtype', mtx.dtype.str ) fd.createArray( info, 'shape', mtx.shape ) fd.createArray( info, 'format', mtx.format ) data = fd.createGroup( '/', 'data' ) fd.createArray( data, 'data', mtx.data ) fd.createArray( data, 'indptr', mtx.indptr ) fd.createArray( data, 'indices', mtx.indices ) except: print 'matrix must be in SciPy sparse CSR/CSC format!' print mtx.__repr__() raise fd.close() def readSparseMatrixHDF5( fileName, outputFormat = None ): import scipy.sparse as sp constructors = {'csr' : sp.csr_matrix, 'csc' : sp.csc_matrix} fd = pt.openFile( fileName, mode = "r" ) info = fd.root.info data = fd.root.data format = info.format.read() if not isinstance( format, str ): format = format[0] dtype = info.dtype.read() if not isinstance( dtype, str ): dtype = dtype[0] if outputFormat is None: constructor = constructors[format] else: constructor = constructors[outputFormat] if format in ['csc', 'csr']: mtx = constructor( (data.data.read(), data.indices.read(), data.indptr.read()), dims = info.shape.read(), dtype = dtype ) elif format == 'coo': mtx = constructor( (data.data.read(), nm.c_[data.rows.read(), data.cols.read()].T), dims = info.shape.read(), dtype = dtype ) else: print format raise ValueError fd.close() if outputFormat in ['csc', 'csr']: mtx.sort_indices() return mtx From josegomez at gmx.net Wed May 7 12:01:42 2008 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Wed, 07 May 2008 18:01:42 +0200 Subject: [SciPy-user] Linear regression with constraints Message-ID: <20080507160142.161530@gmx.net> Hi, I have a set of data (x_i,y_i), and would like to carry out a linear regression using least squares. Further, the slope and intercept are bound (they have to be between 0 and slope_max and 0 and slope_min, respectively). I have though of using one of the "easy to remember" :D optimization methods in scipy that allow boundaries (BFGS, for example). 
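Roughly the shape of what I have in mind is sketched below (x_i and y_i stand for my data arrays and slope_max, intercept_max for my bounds; approx_grad is there only because I do not yet understand the gradient argument):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def sum_sq(p, x, y):
    m, c = p
    resid = y - (m * x + c)
    return np.dot(resid, resid)

best, fval, info = fmin_l_bfgs_b(sum_sq, [1.0, 0.0], args=(x_i, y_i),
                                 approx_grad=True,
                                 bounds=[(0.0, slope_max), (0.0, intercept_max)])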
i can write the equation for the slope and intercept based on x_i and y_i, but I gather that I must provide a gradient estimate of the function at the point of evaluation. How does one go about this? Is this a 2-element array of grad(L) at m_eval, c_eval? Thanks! Jose -- 249 Spiele f?r nur 1 Preis. Die GMX Spieleflatrate schon ab 9,90 Euro. Neu: Asterix bei den Olympischen Spielen: http://flat.games.gmx.de From travis at enthought.com Wed May 7 12:38:24 2008 From: travis at enthought.com (Travis Vaught) Date: Wed, 7 May 2008 11:38:24 -0500 Subject: [SciPy-user] Embarrassing way to code (WooHoo!) Message-ID: Greetings, In case you haven't seen it, I thought I'd share an interesting post about SciPy...short on code, but I suspect a lot of folks here can relate. http://www.vetta.org/2008/05/scipy-the-embarrassing-way-to-code/ Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Wed May 7 13:36:42 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 07 May 2008 20:36:42 +0300 Subject: [SciPy-user] Linear regression with constraints In-Reply-To: <20080507160142.161530@gmx.net> References: <20080507160142.161530@gmx.net> Message-ID: <4821E8AA.9080003@scipy.org> As far as I had seen from the example (btw this one has analytical df) http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/llsp_2.py specialized solvers (like bvls) yield better results (objfunc value) than scipy l_bfgs_b and tnc. You could try using algencan, I had no checked the one (my OO<-> algencan connection was broken that time). It will hardly yield better objfunc value, but maybe it will have less time spent for some large-scale problems. Regards, D. Jose Luis Gomez Dans wrote: > Hi, > I have a set of data (x_i,y_i), and would like to carry out a linear regression using least squares. Further, the slope and intercept are bound (they have to be between 0 and slope_max and 0 and slope_min, respectively). > > I have though of using one of the "easy to remember" :D optimization methods in scipy that allow boundaries (BFGS, for example). i can write the equation for the slope and intercept based on x_i and y_i, but I gather that I must provide a gradient estimate of the function at the point of evaluation. How does one go about this? Is this a 2-element array of grad(L) at m_eval, c_eval? > > Thanks! > Jose > From bnuttall at uky.edu Wed May 7 16:31:35 2008 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Wed, 7 May 2008 16:31:35 -0400 Subject: [SciPy-user] Help with a 3D problem In-Reply-To: <20080507160142.161530@gmx.net> References: <20080507160142.161530@gmx.net> Message-ID: Folks, I need some help figuring out how to plot the trajectory of a path through 3D space. 
I know the trajectory passes through several points with (x, y, z) cooreinates: Point 1: (0,0,0) Point 2: (0,0,-z2) -- the 2 is a subscript for the variable "z", not the square of z Point 3: (A,B,-z3) Point 4: (x4,y4,-C) Point 5: (D,E,-C) A, B, C, D, and E are constants (given) I need to determine the variables z2, z3, x4, and y4 such that: 1) The path of the trajectory from Point 1 to Point 2 is a straight line 2) Points 2, 3, and 4 are on a circular path of constant radius 3) The path of the trajectory from Point 4 to Point 5 is a straight line 4) The straight line paths between points 1 and 2 and points 4 and 5 are tangents to the circular path described by points 2, 3, and 4 5) The straight line paths between points 1 and 2 and points 4 and 5 are perpendicular to each other. Now, what I want to be able to do is once the trajectory is determined, I want to be able to either: Given a distance, L, along the path of the trajectory from (0,0,0), determine the (x,y,z) coordinates of that point, or Given an (x,y,z) location on the trajectory, determine L (inverse of above) Thanks for any help or references. Brandon Nuttall From berthe.loic at gmail.com Thu May 8 08:27:10 2008 From: berthe.loic at gmail.com (LB) Date: Thu, 8 May 2008 05:27:10 -0700 (PDT) Subject: [SciPy-user] [SVN] scipy.test segmentation fault Message-ID: Hi, I recerived sigsegv when running scipy.test() with a recent SVN version. Here are some information on this : ------------------ Installation log ------------------------ svn co http://svn.scipy.org/svn/scipy/trunk scipy [ ... ] R?vision 4244 extraite. cd scipy $python setup.py build --fcompiler=gnu95 Warning: No configuration returned, assuming unavailable. mkl_info: libraries mkl,vml,guide not found in /home/loic/tmp/bluelagoon/lib/ NOT AVAILABLE fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/home/loic/tmp/bluelagoon/include/'] djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /home/loic/tmp/bluelagoon/lib/ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] language = c customize GnuFCompiler Found executable /home/loic/tmp/bluelagoon/bin/gfortran gnu: no Fortran 90 compiler found Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 customize AbsoftFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize CompaqFCompiler customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/home/loic/tmp/bluelagoon/lib/ -llapack - lf77blas -lcblas -latlas -o _configtest ATLAS version 
3.8.0 built by loic on mardi 4 mars 2008, 08:35:34 (UTC +0100): UNAME : Linux bluelagoon 2.6.22-3-686 #1 SMP Sun Feb 10 20:20:49 UTC 2008 i686 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_UNKNOWNx86 -DATL_CPUMHZ=2992 - DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 2097152 F77 : gfortran, version GNU Fortran (GCC) 4.2.3 F77FLAGS : -O -fPIC -m32 SMC : gcc, version gcc (GCC) 4.2.3 SMCFLAGS : -O -fomit-frame-pointer -fPIC -m32 SKC : gcc, version gcc (GCC) 4.2.3 SKCFLAGS : -O -fomit-frame-pointer -fPIC -m32 success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] language = c define_macros = [('ATLAS_INFO', '"\\"3.8.0\\""')] ATLAS version 3.8.0 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in /home/loic/tmp/bluelagoon/lib/ numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] language = f77 customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize IntelFCompiler customize LaheyFCompiler customize PGroupFCompiler customize AbsoftFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize CompaqFCompiler customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall - Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/home/loic/tmp/bluelagoon/lib/ -llapack - llapack -lf77blas -lcblas -latlas -o _configtest ATLAS version 3.8.0 built by loic on mardi 4 mars 2008, 08:35:34 (UTC +0100): UNAME : Linux bluelagoon 2.6.22-3-686 #1 SMP Sun Feb 10 20:20:49 UTC 2008 i686 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_UNKNOWNx86 -DATL_CPUMHZ=2992 - DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 2097152 F77 : gfortran, version GNU Fortran (GCC) 4.2.3 F77FLAGS : -O -fPIC -m32 SMC : gcc, version gcc (GCC) 4.2.3 SMCFLAGS : -O -fomit-frame-pointer -fPIC -m32 SKC : gcc, version gcc (GCC) 4.2.3 SKCFLAGS : -O -fomit-frame-pointer -fPIC -m32 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.8.0\\""')] ATLAS version 3.8.0 ATLAS version 3.8.0 umfpack_info: amd_info: FOUND: libraries = ['amd'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] swig_opts = ['-I/home/loic/tmp/bluelagoon/include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/home/loic/tmp/bluelagoon/include'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/home/loic/tmp/bluelagoon/lib/'] swig_opts = ['-I/home/loic/tmp/bluelagoon/include', '-I/home/loic/ tmp/bluelagoon/include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/home/loic/tmp/bluelagoon/include'] running build [...] Writing /home/loic/tmp/bluelagoon/lib/python2.5/site-packages/ scipy-0.7.0.dev4244-py2.5.egg-info ----------------- Tests ------------------------------------- % gdb python2.5 GNU gdb 6.7.1-debian Copyright (C) 2007 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i486-linux-gnu"... Using host libthread_db library "/lib/i686/cmov/libthread_db.so.1". (gdb) run Starting program: /home/loic/tmp/bluelagoon/bin/python2.5 [Thread debugging using libthread_db enabled] Python 2.5.1 (r251:54863, Mar 4 2008, 18:22:16) [GCC 4.2.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. [New Thread 0xb7e078c0 (LWP 17809)] >>> import scipy >>> scipy.test() [...] ...................................................SSSSSSSSSSS..... Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xb7e078c0 (LWP 17809)] 0xb7102aac in next_char (fmt=0x8d0b068, literal=0) at ../.././ libgfortran/io/format.c:93 93 ../.././libgfortran/io/format.c: No such file or directory. in ../.././libgfortran/io/format.c (gdb) bt #0 0xb7102aac in next_char (fmt=0x8d0b068, literal=0) at ../.././ libgfortran/io/format.c:93 #1 0xb7102b20 in format_lex (fmt=0x8d0b068) at ../.././libgfortran/io/ format.c:183 #2 0xb710384b in *_gfortrani_parse_format (dtp=0xbfcf3460) at ../.././ libgfortran/io/format.c:987 #3 0xb710d0e8 in data_transfer_init (dtp=0xbfcf3460, read_flag=0) at ../.././libgfortran/io/transfer.c:1790 #4 0xb52790e5 in ivout_ () from /home/loic/tmp/bluelagoon/lib/ python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/_arpack.so #5 0x00000000 in ?? () From gnurser at googlemail.com Thu May 8 09:58:11 2008 From: gnurser at googlemail.com (George Nurser) Date: Thu, 8 May 2008 14:58:11 +0100 Subject: [SciPy-user] failures=15, errors=7, scipy svn r4244 on OS X Leopard 10.5.2 Message-ID: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> I'm getting errors. This was with gfortran 4.3.0 from macPorts, standard gcc 4.0.1, numpy svn 5148. Quite a few errors were with matrices having the incorrect shape in test_matvec etc; also a couple of failures in fancy indexing. I wonder if recent work on matrices in numpy may be inconsistent with the scipy test routines now. I was also left with a wxPython window at the end with a spinning beachball that I had to force quit. George Nurser. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: scipy.out Type: application/octet-stream Size: 42465 bytes Desc: not available URL: From gh50 at st-andrews.ac.uk Thu May 8 10:00:51 2008 From: gh50 at st-andrews.ac.uk (Gregor Hagelueken) Date: Thu, 08 May 2008 15:00:51 +0100 Subject: [SciPy-user] lstsq Message-ID: <48230793.1080103@st-andrews.ac.uk> Hi, I would like to use lstsq to fit a curve y using a simple linear combination of three spectra (spectrum1-3 see below). Since the program below always crashes, I played around with it and found that it does not crash if I copy the contents of the array "spectrum2" to the array "spectrum3" (using spectrum3[0:36]=spectrum2[0:36]). But as soon as I change one of the numbers in array "spectrum3" it crashes again. What is the mistake? Thanks, Gregor from numpy import * from numpy.random import normal from pylab import * t = arange(205, 241, 1.0) #spectrum1 spectrum1=array([-25.35,-30.85,-34.6,-36.5,-36.75,-35.95,-35.2,-34.2,-33.6,-33.55,-33.8,-34.2,-34.2,-34.8,-35.4,-36.25,-36.6,-36.85,-36.55,-35.85,-35.05,-33.55,-31.7,-29.6,-27.15,-24.1,-21.5,-18.95,-16.4,-13.85,-11.45,-9.43,-7.76,-6.24,-4.81,-3.88]) #spectrum2 spectrum2=array([8.08,5.37,1.24,-3.07,-6.02,-8.84,-11.2,-13.05,-14.85,-17.75,-17.5,-18,-18.35,-18.65,-18.15,-17.45,-17.15,-16.3,-15,-13.15,-11.57,-9.76,-7.98,-5.79,-3.73,-2.37,-0.61,0.08,0.15,1.17,0.73,0.77,0.38,0.56,0.28,0.64]) #spectrum3 spectrum3=array([-25.8,-21.45,-18.0,-15.59,-12.82,-10.41,-8.41,-6.58,-4.99,-3.85,-2.62,-1.74,-1.02,-0.5,-0.13,0.17,0.03,-0.08,-0.32,-0.35,-0.48,-0.75,-1.12,-1.31,-1.5,-1.55,-1.17,-1.04,-0.72,-0.46,-0.14,0.21,0.01,0.23,0.03,-0.22]) #make curve y y=0.8*spectrum1+0.1*spectrum2+0.2*spectrum3+ normal(0.0, 1.0, len(t)) from numpy.linalg import lstsq Nparam = 3 A=zeros((len(t), Nparam),float) #helix A[:,0]=spectrum1 #beta A[:,1]=spectrum2 #coil A[:,2]=spectrum3 (p, residuals, rank, s) = lstsq(A,y) #A=p*A #fit = A[:,0]+A[:,1]#+A[:,2] #plot (t, y) #plot (t, fit) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Thu May 8 10:39:52 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 08 May 2008 09:39:52 -0500 Subject: [SciPy-user] lstsq In-Reply-To: <48230793.1080103@st-andrews.ac.uk> References: <48230793.1080103@st-andrews.ac.uk> Message-ID: <482310B8.50702@gmail.com> Gregor Hagelueken wrote: > Hi, > > I would like to use lstsq to fit a curve y using a simple linear > combination of three spectra (spectrum1-3 see below). Since the > program below always crashes, I played around with it and found that > it does not crash if I copy the contents of the array "spectrum2" to > the array "spectrum3" (using spectrum3[0:36]=spectrum2[0:36]). But as > soon as I change one of the numbers in array "spectrum3" it crashes again. > > What is the mistake? 
> > Thanks, > Gregor > > from numpy import * > from numpy.random import normal > from pylab import * > > t = arange(205, 241, 1.0) > > #spectrum1 > spectrum1=array([-25.35,-30.85,-34.6,-36.5,-36.75,-35.95,-35.2,-34.2,-33.6,-33.55,-33.8,-34.2,-34.2,-34.8,-35.4,-36.25,-36.6,-36.85,-36.55,-35.85,-35.05,-33.55,-31.7,-29.6,-27.15,-24.1,-21.5,-18.95,-16.4,-13.85,-11.45,-9.43,-7.76,-6.24,-4.81,-3.88]) > #spectrum2 > spectrum2=array([8.08,5.37,1.24,-3.07,-6.02,-8.84,-11.2,-13.05,-14.85,-17.75,-17.5,-18,-18.35,-18.65,-18.15,-17.45,-17.15,-16.3,-15,-13.15,-11.57,-9.76,-7.98,-5.79,-3.73,-2.37,-0.61,0.08,0.15,1.17,0.73,0.77,0.38,0.56,0.28,0.64]) > #spectrum3 > spectrum3=array([-25.8,-21.45,-18.0,-15.59,-12.82,-10.41,-8.41,-6.58,-4.99,-3.85,-2.62,-1.74,-1.02,-0.5,-0.13,0.17,0.03,-0.08,-0.32,-0.35,-0.48,-0.75,-1.12,-1.31,-1.5,-1.55,-1.17,-1.04,-0.72,-0.46,-0.14,0.21,0.01,0.23,0.03,-0.22]) > > #make curve y > y=0.8*spectrum1+0.1*spectrum2+0.2*spectrum3+ normal(0.0, 1.0, len(t)) > > from numpy.linalg import lstsq > Nparam = 3 > A=zeros((len(t), Nparam),float) > > #helix > A[:,0]=spectrum1 > #beta > A[:,1]=spectrum2 > #coil > A[:,2]=spectrum3 > > (p, residuals, rank, s) = lstsq(A,y) > > #A=p*A > #fit = A[:,0]+A[:,1]#+A[:,2] > #plot (t, y) > #plot (t, fit) > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi, This works for me on Linux x86_64, Python 2.5 and '1.1.0.dev5133'. What are you using and what is the actual error message? Bruce From gh50 at st-andrews.ac.uk Thu May 8 11:06:10 2008 From: gh50 at st-andrews.ac.uk (Gregor Hagelueken) Date: Thu, 08 May 2008 16:06:10 +0100 Subject: [SciPy-user] lstsq In-Reply-To: <482310B8.50702@gmail.com> References: <48230793.1080103@st-andrews.ac.uk> <482310B8.50702@gmail.com> Message-ID: <482316E2.9070706@st-andrews.ac.uk> Bruce Southey wrote: > Gregor Hagelueken wrote: > >> Hi, >> >> I would like to use lstsq to fit a curve y using a simple linear >> combination of three spectra (spectrum1-3 see below). Since the >> program below always crashes, I played around with it and found that >> it does not crash if I copy the contents of the array "spectrum2" to >> the array "spectrum3" (using spectrum3[0:36]=spectrum2[0:36]). But as >> soon as I change one of the numbers in array "spectrum3" it crashes again. >> >> What is the mistake? 
>> >> Thanks, >> Gregor >> >> from numpy import * >> from numpy.random import normal >> from pylab import * >> >> t = arange(205, 241, 1.0) >> >> #spectrum1 >> spectrum1=array([-25.35,-30.85,-34.6,-36.5,-36.75,-35.95,-35.2,-34.2,-33.6,-33.55,-33.8,-34.2,-34.2,-34.8,-35.4,-36.25,-36.6,-36.85,-36.55,-35.85,-35.05,-33.55,-31.7,-29.6,-27.15,-24.1,-21.5,-18.95,-16.4,-13.85,-11.45,-9.43,-7.76,-6.24,-4.81,-3.88]) >> #spectrum2 >> spectrum2=array([8.08,5.37,1.24,-3.07,-6.02,-8.84,-11.2,-13.05,-14.85,-17.75,-17.5,-18,-18.35,-18.65,-18.15,-17.45,-17.15,-16.3,-15,-13.15,-11.57,-9.76,-7.98,-5.79,-3.73,-2.37,-0.61,0.08,0.15,1.17,0.73,0.77,0.38,0.56,0.28,0.64]) >> #spectrum3 >> spectrum3=array([-25.8,-21.45,-18.0,-15.59,-12.82,-10.41,-8.41,-6.58,-4.99,-3.85,-2.62,-1.74,-1.02,-0.5,-0.13,0.17,0.03,-0.08,-0.32,-0.35,-0.48,-0.75,-1.12,-1.31,-1.5,-1.55,-1.17,-1.04,-0.72,-0.46,-0.14,0.21,0.01,0.23,0.03,-0.22]) >> >> #make curve y >> y=0.8*spectrum1+0.1*spectrum2+0.2*spectrum3+ normal(0.0, 1.0, len(t)) >> >> from numpy.linalg import lstsq >> Nparam = 3 >> A=zeros((len(t), Nparam),float) >> >> #helix >> A[:,0]=spectrum1 >> #beta >> A[:,1]=spectrum2 >> #coil >> A[:,2]=spectrum3 >> >> (p, residuals, rank, s) = lstsq(A,y) >> >> #A=p*A >> #fit = A[:,0]+A[:,1]#+A[:,2] >> #plot (t, y) >> #plot (t, fit) >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > Hi, > This works for me on Linux x86_64, Python 2.5 and '1.1.0.dev5133'. > What are you using and what is the actual error message? > > Bruce > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi Bruce, I have a Vista PC (Intel Core2Duo) and Python 2.5 installed. I do not get any error message, it just hangs up, uses 50% of CPU and does not stop again. Sorry, but I am not sure what you mean by '1.1.0.dev5133'? Thanks, Gregor -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Thu May 8 12:46:22 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 8 May 2008 11:46:22 -0500 Subject: [SciPy-user] failures=15, errors=7, scipy svn r4244 on OS X Leopard 10.5.2 In-Reply-To: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, May 8, 2008 at 8:58 AM, George Nurser wrote: > I'm getting errors. This was with gfortran 4.3.0 from macPorts, > standard gcc 4.0.1, numpy svn 5148. > > Quite a few errors were with matrices having the incorrect shape in > test_matvec etc; also a couple of failures in fancy indexing. > I wonder if recent work on matrices in numpy may be inconsistent with > the scipy test routines now. After updating to numpy svn 5148 I see the same errors. I don't follow the numpy mailing list so the matrix indexing change was a surprise for me. I find it a little obnoxious that A[1] on a numpy matrix now returns a rank 1 array. This isn't the sort of change I would have expected from software that bears the magical 1.x designation. The only thing worse than counterintuitive behavior is non-backwards compatible changes in a minor version release. I don't know when/if I'll change sparse to conform to the dense matrices. 
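Concretely, the change I am complaining about looks like this with the numpy svn in question (older releases gave a (1, 3) matrix in both cases, as I understand it):

from numpy import asmatrix, arange

A = asmatrix(arange(9).reshape(3,3))
print A[1].shape      # now (3,), a rank 1 ndarray
print A[1,:].shape    # still (1, 3), a matrix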
Numpy matrices seems to be a moving target these days and my spare time is not so abundant that I can write code for moving targets. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_sparse.out Type: application/octet-stream Size: 8047 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Thu May 8 13:32:45 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 08 May 2008 19:32:45 +0200 Subject: [SciPy-user] lstsq In-Reply-To: <482316E2.9070706@st-andrews.ac.uk> References: <48230793.1080103@st-andrews.ac.uk> <482310B8.50702@gmail.com> <482316E2.9070706@st-andrews.ac.uk> Message-ID: On Thu, 08 May 2008 16:06:10 +0100 Gregor Hagelueken wrote: > Bruce Southey wrote: >> Gregor Hagelueken wrote: >> >>> Hi, >>> >>> I would like to use lstsq to fit a curve y using a >>>simple linear >>> combination of three spectra (spectrum1-3 see below). >>>Since the >>> program below always crashes, I played around with it >>>and found that >>> it does not crash if I copy the contents of the array >>>"spectrum2" to >>> the array "spectrum3" (using >>>spectrum3[0:36]=spectrum2[0:36]). But as >>> soon as I change one of the numbers in array "spectrum3" >>>it crashes again. >>> >>> What is the mistake? >>> >>> Thanks, >>> Gregor >>> >>> from numpy import * >>> from numpy.random import normal >>> from pylab import * >>> >>> t = arange(205, 241, 1.0) >>> >>> #spectrum1 >>> spectrum1=array([-25.35,-30.85,-34.6,-36.5,-36.75,-35.95,-35.2,-34.2,-33.6,-33.55,-33.8,-34.2,-34.2,-34.8,-35.4,-36.25,-36.6,-36.85,-36.55,-35.85,-35.05,-33.55,-31.7,-29.6,-27.15,-24.1,-21.5,-18.95,-16.4,-13.85,-11.45,-9.43,-7.76,-6.24,-4.81,-3.88]) >>> #spectrum2 >>> spectrum2=array([8.08,5.37,1.24,-3.07,-6.02,-8.84,-11.2,-13.05,-14.85,-17.75,-17.5,-18,-18.35,-18.65,-18.15,-17.45,-17.15,-16.3,-15,-13.15,-11.57,-9.76,-7.98,-5.79,-3.73,-2.37,-0.61,0.08,0.15,1.17,0.73,0.77,0.38,0.56,0.28,0.64]) >>> #spectrum3 >>> spectrum3=array([-25.8,-21.45,-18.0,-15.59,-12.82,-10.41,-8.41,-6.58,-4.99,-3.85,-2.62,-1.74,-1.02,-0.5,-0.13,0.17,0.03,-0.08,-0.32,-0.35,-0.48,-0.75,-1.12,-1.31,-1.5,-1.55,-1.17,-1.04,-0.72,-0.46,-0.14,0.21,0.01,0.23,0.03,-0.22]) >>> >>> #make curve y >>> y=0.8*spectrum1+0.1*spectrum2+0.2*spectrum3+ normal(0.0, >>>1.0, len(t)) >>> >>> from numpy.linalg import lstsq >>> Nparam = 3 >>> A=zeros((len(t), Nparam),float) >>> >>> #helix >>> A[:,0]=spectrum1 >>> #beta >>> A[:,1]=spectrum2 >>> #coil >>> A[:,2]=spectrum3 >>> >>> (p, residuals, rank, s) = lstsq(A,y) >>> >>> #A=p*A >>> #fit = A[:,0]+A[:,1]#+A[:,2] >>> #plot (t, y) >>> #plot (t, fit) >>> ------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> Hi, >> This works for me on Linux x86_64, Python 2.5 and >>'1.1.0.dev5133'. >> What are you using and what is the actual error message? >> >> Bruce >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > Hi Bruce, > > I have a Vista PC (Intel Core2Duo) and Python 2.5 >installed. > > I do not get any error message, it just hangs up, uses >50% of CPU and does not stop again. > > Sorry, but I am not sure what you mean by >'1.1.0.dev5133'? 
Python 2.5 (r25:51908, Jan 10 2008, 18:01:52) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.__version__ '1.2.0.dev5148' >>> Nils > > Thanks, > Gregor From aisaac at american.edu Thu May 8 13:56:44 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 May 2008 13:56:44 -0400 Subject: [SciPy-user] [SciPy-dev] failures=15, errors=7, scipy svn r4244 on OS X Leopard 10.5.2 In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, 8 May 2008, Nathan Bell apparently wrote: > A[1] on a numpy matrix now returns a rank 1 array. But A[1,:] still returns a matrix. Cheers, Alan Isaac From wnbell at gmail.com Thu May 8 14:26:36 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 8 May 2008 13:26:36 -0500 Subject: [SciPy-user] [SciPy-dev] failures=15, errors=7, scipy svn r4244 on OS X Leopard 10.5.2 In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, May 8, 2008 at 12:56 PM, Alan G Isaac wrote: > > But A[1,:] still returns a matrix. > I understand. However there are no rank 1 implementations of the matrices in scipy.sparse, so it was more natural for A[1] to return a 2d matrix. Furthermore, we now have the odd situation where indexing a matrix now returns a different type. We traded one sub-optimal behavior for another and broke existing code in the process. Oh, and iterating over the rows of a matrix works differently now, e.g. from numpy import * A = asmatrix(arange(9).reshape(3,3)) for row in A: print row.shape row.shape is now (3,) rather than (1,3) Oh, and these non-backwards compatible changes were made on a minor version release. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From wnbell at gmail.com Thu May 8 14:29:51 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 8 May 2008 13:29:51 -0500 Subject: [SciPy-user] failures=15, errors=7, scipy svn r4244 on OS X Leopard 10.5.2 In-Reply-To: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, May 8, 2008 at 8:58 AM, George Nurser wrote: > > Quite a few errors were with matrices having the incorrect shape in > test_matvec etc; also a couple of failures in fancy indexing. > I wonder if recent work on matrices in numpy may be inconsistent with > the scipy test routines now. > I added some workarounds/fixes in r4250. Let me know if the problem still exists. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From aisaac at american.edu Thu May 8 14:51:08 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 May 2008 14:51:08 -0400 Subject: [SciPy-user] change in matrix behavior In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, 8 May 2008, Nathan Bell apparently wrote: > there are no rank 1 implementations of the > matrices in scipy.sparse, so it was more natural for A[1] to return a > 2d matrix. Furthermore, we now have the odd situation where indexing > a matrix now returns a different type. These issues were extensively discussed on the NumPy list. I hope you will read that discussion. > Oh, and iterating over the rows of a matrix works differently now, e.g. 
> from numpy import * > A = asmatrix(arange(9).reshape(3,3)) > for row in A: > print row.shape > row.shape is now (3,) rather than (1,3) After *extensive* discussion, most people felt that this is a good thing. Among those who did not agree or were not sure, raising an error in response to scalar indexing was popular. Otherwise, every function that iterates through arrays and expects dimensional reduction as it does so must special case matrices. This was seen as untenable. Query: How do you use iteration over a matrix? > Oh, and these non-backwards compatible changes were made > on a minor version release. I submitted a patch that would have issued a warning about scalar indexing rather than change the behavior immediately. However, as a user, I believe this change is both good and necessary. I do not have an opinion on the timing, as long as the change really takes place. My impression of the discussion was that 1.1 was being considered as almost the last opportunity for API changes ... Query: can you specify what needed behavior appears to have broken because of this change? Cheers, Alan Isaac From wnbell at gmail.com Thu May 8 15:12:51 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 8 May 2008 14:12:51 -0500 Subject: [SciPy-user] change in matrix behavior In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, May 8, 2008 at 1:51 PM, Alan G Isaac wrote: > > These issues were extensively discussed on the NumPy list. > I hope you will read that discussion. > I read a substantial part of it. Really, I think there are only two positions that make any sense: 1) deprecate numpy.matrix and leave numpy.matrix as-is 2) leave numpy.matrix as-is Making special matrices for row and column vectors only complicates matters. Personally, I favor position 1. > After *extensive* discussion, most people felt that this is > a good thing. Among those who did not agree or were not > sure, raising an error in response to scalar indexing was > popular. Otherwise, every function that iterates through > arrays and expects dimensional reduction as it does so must > special case matrices. This was seen as untenable. This is a case against the existence of matrices, not a case for changing matrix indexing. If matrices are to behave like ndarrays then they don't need to exist. > Query: How do you use iteration over a matrix? I was using it here: http://projects.scipy.org/scipy/scipy/changeset/4250 Clearly the work around was not difficult, but the change to numpy did break my code. > Query: can you specify what needed behavior appears to have > broken because of this change? Well, it should do without saying. Anyone who expected: 1) A[0] to be a 2d matrix 2) A[0] to be of type numpy.matrix 3) iteration over rows to return row matrices will now have broken code. I just don't see any argument for making matrices more like arrays that doesn't logically conclude with eliminating matrices altogether. The "artifact" of matrices that was recently eliminated was, IMO, one of the only reasons for them to exist. 
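A quick way to see which of the two behaviors a given numpy build has (a sketch added for illustration, not part of the original posts):

import numpy as np

A = np.asmatrix(np.arange(9).reshape(3, 3))

# The contested case: scalar indexing and row iteration.  On the changed
# development builds discussed above these return 1-d arrays of shape (3,),
# while slice indexing keeps the (1, 3) matrix type.
print type(A[0]), np.shape(A[0])
print type(A[0, :]), A[0, :].shape
for row in A:
    print np.shape(row)
    break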
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From peridot.faceted at gmail.com Thu May 8 15:57:29 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 8 May 2008 21:57:29 +0200 Subject: [SciPy-user] change in matrix behavior In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: 2008/5/8 Nathan Bell : > I just don't see any argument for making matrices more like arrays > that doesn't logically conclude with eliminating matrices altogether. > The "artifact" of matrices that was recently eliminated was, IMO, one > of the only reasons for them to exist. Well, the primary (only?) reason for matrices to exist is the multiplication operator. That was unaffected by the current change. I would say that most of the people in the discussion thought that matrices should behave as much like arrays as possible, with the sole difference being the * and ** operators. Of course, I use arrays instead of matrices, so I'm not particularly aware of any more subtle conveniences they offer. Perhaps you could contribute some use cases that demonstrate the utility of the very surprising old scalar indexing behaviour? Anne From wnbell at gmail.com Thu May 8 16:32:26 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 8 May 2008 15:32:26 -0500 Subject: [SciPy-user] change in matrix behavior In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, May 8, 2008 at 2:57 PM, Anne Archibald wrote: > > Well, the primary (only?) reason for matrices to exist is the > multiplication operator. That was unaffected by the current change. I > would say that most of the people in the discussion thought that > matrices should behave as much like arrays as possible, with the sole > difference being the * and ** operators. IIRC matrices also have attributes for inverses and overload \ for solving linear system. If matrix indexing is to be changed I don't know why we don't go the extra mile and deprecate matrices? They really don't serve much purpose and complicate life for both new and experienced users (albeit in different ways). > Of course, I use arrays instead of matrices, so I'm not particularly > aware of any more subtle conveniences they offer. Aside from supporting them in in scipy.sparse (e.g. A*x -> y, where x and y are matrices) I don't use matrices either. > Perhaps you could > contribute some use cases that demonstrate the utility of the very > surprising old scalar indexing behaviour? You misunderstand. I don't claim to make a case that the old scalar indexing was the correct/proper/intuitive. On the contrary, as many have pointed out it leads to rather unexpected behavior in some cases. OTOH the proposed "fix" introduces its own problem (A[1] is now an array rather than a matrix) and breaks backwards-compatibility. It's unprofessional to break backwards compatibility unless you have a *really good reason* and you are confident that the new method is *the right way to do it*. IMO the proposed changes don't meet either of these standards. Again, why don't we deprecate numpy.matrix? Clearly no one here uses it and it is a major pain to handle correctly in SciPy. 
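For context on what numpy.matrix actually adds over ndarray, which is essentially the meaning of * and ** plus a few conveniences such as the .I inverse attribute mentioned above, here is a minimal sketch (not from the thread):

import numpy as np

a = np.array([[1., 2.], [3., 4.]])
m = np.matrix(a)

print a * a      # ndarray: elementwise product
print m * m      # matrix: true matrix product, same result as np.dot(a, a)
print m ** 2     # matrix power rather than elementwise squaring
print m.I        # matrix inverse attribute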
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From aisaac at american.edu Thu May 8 17:46:33 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 May 2008 17:46:33 -0400 Subject: [SciPy-user] change in matrix behavior In-Reply-To: References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, 8 May 2008, Nathan Bell apparently wrote: > why don't we deprecate numpy.matrix? Clearly no one here > uses it and it is a major pain to handle correctly in > SciPy. I have a rather different perspective. I have switched to using Python in teaching. In my field (economics), students are often familiar with matrices, which are extensively used by economists and econometricians. The availability of NumPy matrices is a key reason I am able to switch to Python for these courses. I think economics is not unique, and looking forward, this kind of teaching use of NumPy matrices will become quite common. I also personally find NumPy matrices convenient when doing linear algebra. Cheers, Alan Isaac PS I think the current change will render matrices more usable for two reasons. First, it is natural that iteration over a matrix produce 1d object, for reasons you can find in the NumPy discussion threads. Second and related, matrices will be handled correctly by functions that rely on dimension reducing iterations. Finally, I expect something like ``rows`` and ``cols`` attributes to be added, which will add a natural, explicit, and symmetric iterative access to these submatrices. From lechtlr at yahoo.com Sat May 10 10:14:18 2008 From: lechtlr at yahoo.com (lechtlr) Date: Sat, 10 May 2008 07:14:18 -0700 (PDT) Subject: [SciPy-user] Dealing with Large Data Sets Message-ID: <357514.22753.qm@web57912.mail.re3.yahoo.com> I try to create an array called 'results' as provided in an example below. Is there a way to do this operation more efficiently when the number of 'data_x' array gets larger ? Also, I am looking for pointers to eliminate intermediate 'data_x' arrays, while creating 'results' in the following procedure. Thanks, Lex from numpy import * from numpy.random import * # what is the best way to create an array named 'results' below # when number of 'data_x' (i.e., x = 1, 2.....1000) is large. # Also nrows and ncolumns can go upto 10000 nrows = 5 ncolumns = 10 data_1 = zeros([nrows, ncolumns], 'd') data_2 = zeros([nrows, ncolumns], 'd') data_3 = zeros([nrows, ncolumns], 'd') # to store squared sum of each column from the arrays above results = zeros([3,ncolumns], 'd') # loop to store raw data from a numerical operation; # rand() is given as an example here for i in range(nrows): for j in range(ncolumns): data_1[i,j] = rand() data_2[i,j] = rand() data_3[i,j] = rand() # store squared sum of each column from data_x for k in range(ncolumns): results[0,k] = dot(data_1[:,k], data_1[:,k]) results[1,k] = dot(data_2[:,k], data_2[:,k]) results[2,k] = dot(data_3[:,k], data_3[:,k]) print results --------------------------------- Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peridot.faceted at gmail.com Sat May 10 15:55:22 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 10 May 2008 15:55:22 -0400 Subject: [SciPy-user] Dealing with Large Data Sets In-Reply-To: <357514.22753.qm@web57912.mail.re3.yahoo.com> References: <357514.22753.qm@web57912.mail.re3.yahoo.com> Message-ID: 2008/5/10 lechtlr : > I try to create an array called 'results' as provided in an example below. > Is there a way to do this operation more efficiently when the number of > 'data_x' array gets larger ? Also, I am looking for pointers to eliminate > intermediate 'data_x' arrays, while creating 'results' in the following > procedure. The rule of thumb is, if you want to do the same thing to many elements, just create an array of input values, then write the calculation as if you had a single input value. Most numpy functions act elementwise. > from numpy import * > from numpy.random import * > > # what is the best way to create an array named 'results' below > # when number of 'data_x' (i.e., x = 1, 2.....1000) is large. > # Also nrows and ncolumns can go upto 10000 > > nrows = 5 > ncolumns = 10 > > data_1 = zeros([nrows, ncolumns], 'd') > data_2 = zeros([nrows, ncolumns], 'd') > data_3 = zeros([nrows, ncolumns], 'd') > > # to store squared sum of each column from the arrays above > results = zeros([3,ncolumns], 'd') > > # loop to store raw data from a numerical operation; > # rand() is given as an example here > for i in range(nrows): > for j in range(ncolumns): > data_1[i,j] = rand() > data_2[i,j] = rand() > data_3[i,j] = rand() > > # store squared sum of each column from data_x > for k in range(ncolumns): > results[0,k] = dot(data_1[:,k], data_1[:,k]) > results[1,k] = dot(data_2[:,k], data_2[:,k]) > results[2,k] = dot(data_3[:,k], data_3[:,k]) > > print results import numpy as np data = np.random.rand(ndata,nrows,ncolumns) results = (data**2).sum(axis=0) or even results = (np.random.rand(ndata,nrows,ncolumns)**2).sum(axis=0) That last operation, which I have written as (data**2).sum(axis=0) is kind of an embarrassment; dot() or its cousin tensordot() would be more efficient, but they don't have a suitable "elementwise" implementation. Nevertheless, squaring and then summing gives the right answer. Anne From eads at soe.ucsc.edu Sat May 10 16:22:38 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 10 May 2008 13:22:38 -0700 Subject: [SciPy-user] Dealing with Large Data Sets In-Reply-To: <357514.22753.qm@web57912.mail.re3.yahoo.com> References: <357514.22753.qm@web57912.mail.re3.yahoo.com> Message-ID: <4826040E.6080604@soe.ucsc.edu> Hi Lex, lechtlr wrote: > I try to create an array called 'results' as provided in an example > below. Is there a way to do this operation more efficiently when the > number of 'data_x' array gets larger ? Also, I am looking for pointers > to eliminate intermediate 'data_x' arrays, while creating 'results' in > the following procedure. If you know a priori how many "data_x" arrays you need, you should allocate a 3-dimensional array. data_x = numpy.zeros((nrects, nrows, ncolumns)) I advise against using for loops over the elements of an array when data_x is large. Many operations within numpy and scipy have been carefully designed to work over large arrays. Developing an intuition on how to vectorize will be very helpful. The documentation on numpy array slicing and vectorization in numpy is extensive; Travis Oliphant's Guide to Numpy is an excellent reference on these topics. 
> from numpy import * > from numpy.random import * This is a bad practice so please avoid it. Either import the functions you need or import the packages using shorter names, e.g. import numpy as np > > # what is the best way to create an array named 'results' below > # when number of 'data_x' (i.e., x = 1, 2.....1000) is large. > # Also nrows and ncolumns can go upto 10000 > > nrows = 5 > ncolumns = 10 > > data_1 = zeros([nrows, ncolumns], 'd') > data_2 = zeros([nrows, ncolumns], 'd') > data_3 = zeros([nrows, ncolumns], 'd') > > # to store squared sum of each column from the arrays above > results = zeros([3,ncolumns], 'd') > > # loop to store raw data from a numerical operation; > # rand() is given as an example here > for i in range(nrows): > for j in range(ncolumns): > data_1[i,j] = rand() > data_2[i,j] = rand() > data_3[i,j] = rand() numpy.random.rand(m, n) generates an m by n array of doubles drawn from a U[0, 1] while numpy.random.rand(q, m, n) generates a q by m by n array. The rand function can take any number of integer arguments to generate a random array of arbitrary dimension. > # store squared sum of each column from data_x > for k in range(ncolumns): > results[0,k] = dot(data_1[:,k], data_1[:,k]) > results[1,k] = dot(data_2[:,k], data_2[:,k]) > results[2,k] = dot(data_3[:,k], data_3[:,k]) > > print results The code above can be reduced to import numpy as np data = np.random.rand(3, 5, 10) results = (data ** 2).sum(axis=2) which generates a 3 by 5 by 10 array. (data ** 2) squares each value in the array then .sum(axis=2) sums over each column generating a 3 by 5 array object, referenced by 'results'. Since you mentioned the a possibility you may deal with large data sets. In these situations, in-place vectorization can be very helpful. The code below makes use of = operators data **= 2.0 data.sum(axis = 2) which perform the operations in an in-place fashion. If data.sum(axis = 2) is large, preallocate an array to store the sum, # for summing over columns sum_result = numpy.zeros(data.shape[0:2]) There are quite a few of these in-place operators available in Python that numpy.ndarray defines. Try typing help(numpy.ndarray) for a full listing. Of course, it really depends what your definition of large is. I frequently work with gigabyte+ data sets so to me that's large and tools such as in-place vectorization, weave, mmap, C extensions are essential. However, to others, any data too large for a human to make sense of through cursory visual inspection is large. So, if that's what you mean by large, you should not see appreciable gains with the vectorization approaches I mentioned. Below are links describing array slicing and manipulation in detail (the first two are free), http://www.scipy.org/Tentative_NumPy_Tutorial http://www.scipy.org/NumPy_for_Matlab_Users http://www.tramy.us/ (Guide to Numpy) I hope this helps. Cheers, Damian From eads at soe.ucsc.edu Sat May 10 16:27:07 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 10 May 2008 13:27:07 -0700 Subject: [SciPy-user] Dealing with Large Data Sets In-Reply-To: <4826040E.6080604@soe.ucsc.edu> References: <357514.22753.qm@web57912.mail.re3.yahoo.com> <4826040E.6080604@soe.ucsc.edu> Message-ID: <4826051B.2090101@soe.ucsc.edu> Damian Eads wrote: > which perform the operations in an in-place fashion. 
If data.sum(axis = > 2) is large, preallocate an array to store the sum, > > # for summing over columns > sum_result = numpy.zeros(data.shape[0:2]) I meant to include data **= 2 np.sum(data, axis=2, out=sum_result) which does an in-place, element-wise exponentiate, sums over the columns, and stores the result in sum_result. Damian From peridot.faceted at gmail.com Sat May 10 16:39:04 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 10 May 2008 16:39:04 -0400 Subject: [SciPy-user] Dealing with Large Data Sets In-Reply-To: <4826051B.2090101@soe.ucsc.edu> References: <357514.22753.qm@web57912.mail.re3.yahoo.com> <4826040E.6080604@soe.ucsc.edu> <4826051B.2090101@soe.ucsc.edu> Message-ID: 2008/5/10 Damian Eads : > Damian Eads wrote: > >> which perform the operations in an in-place fashion. If data.sum(axis = >> 2) is large, preallocate an array to store the sum, >> >> # for summing over columns >> sum_result = numpy.zeros(data.shape[0:2]) > > I meant to include > > data **= 2 > np.sum(data, axis=2, out=sum_result) > > which does an in-place, element-wise exponentiate, sums over the > columns, and stores the result in sum_result. What is the advantage to preallocating the result rather than letting sum() do the allocation? Ane From eads at soe.ucsc.edu Sun May 11 03:38:50 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sun, 11 May 2008 00:38:50 -0700 Subject: [SciPy-user] Dealing with Large Data Sets In-Reply-To: References: <357514.22753.qm@web57912.mail.re3.yahoo.com> <4826040E.6080604@soe.ucsc.edu> <4826051B.2090101@soe.ucsc.edu> Message-ID: <4826A28A.30405@soe.ucsc.edu> Anne Archibald wrote: > 2008/5/10 Damian Eads : >> Damian Eads wrote: >> >>> which perform the operations in an in-place fashion. If data.sum(axis = >>> 2) is large, preallocate an array to store the sum, >>> >>> # for summing over columns >>> sum_result = numpy.zeros(data.shape[0:2]) >> I meant to include >> >> data **= 2 >> np.sum(data, axis=2, out=sum_result) >> >> which does an in-place, element-wise exponentiate, sums over the >> columns, and stores the result in sum_result. > > What is the advantage to preallocating the result rather than letting > sum() do the allocation? If the computation is repeated millions of times and the sum array is large (100s of MBs), then it is certainly advantageous to allocate the sum array once than for each computation. Damian From gael.varoquaux at normalesup.org Sun May 11 13:15:53 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 11 May 2008 19:15:53 +0200 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> Message-ID: <20080511171553.GI28418@phare.normalesup.org> On Tue, May 06, 2008 at 04:42:06PM +0800, Roger Herikstad wrote: > Thanks alot! I was looking at hdf5 as an alternative, and by your > description I think it might suit my needs. I've been considering > using PyTables for a while, but never had the initiative to do so, but > I guess this is it... My one concern is to make this as invisible to > pure matlab users as possible. I have written some matlab code to do the saving to/loading from hdf5 as a drop-in replacement of the vanilla matlab save/load functions. The code might not work on every matlab structure, they work on the ones we used. The code is available on http://www.scipy.org/Cookbook/hdf5_in_Matlab . Feel free to improve it and put a new version there. 
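Alongside the HDF5 route above, the plain pre-7.3 MAT-file format is another common-storage option. Assuming a scipy recent enough to provide scipy.io.savemat next to loadmat, and with an arbitrary file name, a round trip looks roughly like this (a sketch, not from the original posts):

import numpy as np
from scipy import io

x = np.arange(10.0)
io.savemat('shared.mat', {'x': x})    # readable in matlab via load('shared.mat')
contents = io.loadmat('shared.mat')   # and back into python as a dict of arrays
print contents['x']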
Cheers, Ga?l From gael.varoquaux at normalesup.org Sun May 11 13:19:19 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 11 May 2008 19:19:19 +0200 Subject: [SciPy-user] common storage between matlab and python In-Reply-To: <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> References: <5eec5f300805052310i7cc474cdx5b9b42c91dd74acb@mail.gmail.com> <3aou9wom.fsf@wgmail2.gatwick.eur.slb.com> Message-ID: <20080511171919.GJ28418@phare.normalesup.org> On Wed, May 07, 2008 at 12:32:25PM +0100, Pete Forman wrote: > MATLAB has read and write capabilities for HDF5 though I do not know > how well your MATLAB classes are done. They used to be really bad. > MATLAB's save function will take a -v7.3 option to use the HDF5-based > version of the MATLAB MAT-file. That, on the contrary, is fantastic news. I am no longer a heavy user of matlab (it must be a couple of years since I last coded anything in matlab), but it is very handy to know this. Thanks for keeping us informed. This means the death of my matlab scripts, and that's good news: the less code to maintain the better. Cheers, Ga?l From gary.pajer at gmail.com Mon May 12 14:27:20 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Mon, 12 May 2008 14:27:20 -0400 Subject: [SciPy-user] hilbert on arrays Message-ID: <88fe22a0805121127kabb38dtd41291f52a9e1989@mail.gmail.com> I have a real data array. I'd like to take the hilbert transform and end up with an array where each row contains the HT of the corresponding data row. In other words, if my data is an (2,N) array called d, I'd like to end up with an array h such that h[0] is the HT of d[0] h[1] is the HT of d[1] but evidently scipy.signal.hilbert(d)[0] does not equal scipy.signal.hilbert(d[0]) Is there a one-step solution to this, or do I have to iterate over the rows of d, calling scipy.signal.hilbert each time ? thanks, -gary From cclarke at chrisdev.com Mon May 12 16:45:56 2008 From: cclarke at chrisdev.com (Christopher Clarke) Date: Mon, 12 May 2008 16:45:56 -0400 Subject: [SciPy-user] stats.py pstat.py mirror site Message-ID: Hi Gary Strangman's site appears to be down for a while now. Anybody have a link to a mirror site??? Thanks Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Mon May 12 16:51:22 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 12 May 2008 22:51:22 +0200 Subject: [SciPy-user] hilbert on arrays In-Reply-To: <88fe22a0805121127kabb38dtd41291f52a9e1989@mail.gmail.com> References: <88fe22a0805121127kabb38dtd41291f52a9e1989@mail.gmail.com> Message-ID: <9457e7c80805121351m5d562b44ub138eb29359d9c04@mail.gmail.com> 2008/5/12 Gary Pajer : > Is there a one-step solution to this, or do I have to iterate over the > rows of d, calling scipy.signal.hilbert each time ? You can always get someone else to do the dirty work for you: Definition: np.apply_along_axis(func1d, axis, arr, *args) Docstring: Execute func1d(arr[i],*args) where func1d takes 1-D arrays and arr is an N-d array. i varies so as to apply the function along the given axis for each 1-d subarray in arr. Cheers St?fan From strang at nmr.mgh.harvard.edu Mon May 12 19:09:34 2008 From: strang at nmr.mgh.harvard.edu (Gary Strangman) Date: Mon, 12 May 2008 19:09:34 -0400 (EDT) Subject: [SciPy-user] stats.py pstat.py mirror site In-Reply-To: References: Message-ID: As far as I am can tell, my site is live-and-kicking at ... 
http://www.nmr.mgh.harvard.edu/nsg/strang/python.html -best Gary On Mon, 12 May 2008, Christopher Clarke wrote: > Hi > Gary Strangman's site appears to be down for a while now. Anybody have a > link to a mirror site??? > Thanks > Chris From grs2103 at columbia.edu Mon May 12 20:52:59 2008 From: grs2103 at columbia.edu (Gideon Simpson) Date: Mon, 12 May 2008 20:52:59 -0400 Subject: [SciPy-user] UMFPACK error Message-ID: <5D0119C3-B205-49DC-A445-889A2BE49E77@columbia.edu> Not sure what I've done wrong, but I fed numpy the locations of my UMFPACK and AMD software, and both numpy 1.0.4 and scipy 0.6.0 seemed to install properly. Also their test suites ran as expected. However, if I do from scipy import * I get: Traceback (most recent call last): File "", line 1, in File "/opt/lib/python2.5/site-packages/scipy/linsolve/__init__.py", line 5, in import umfpack File "/opt/lib/python2.5/site-packages/scipy/linsolve/umfpack/ __init__.py", line 3, in from umfpack import * File "/opt/lib/python2.5/site-packages/scipy/linsolve/umfpack/ umfpack.py", line 186, in UMFPACK_WARNING_determinant_underflow : 'UMFPACK_WARNING_determinant_underflow', NameError: name 'UMFPACK_WARNING_determinant_underflow' is not defined -gideon From cclarke at chrisdev.com Mon May 12 21:32:01 2008 From: cclarke at chrisdev.com (Christopher Clarke) Date: Mon, 12 May 2008 21:32:01 -0400 Subject: [SciPy-user] stats.py pstat.py mirror site In-Reply-To: References: Message-ID: I don't know the site is really slow for me. No worries i eventually got both files from the google cache Regards Chris On 12 May 2008, at 19:09, Gary Strangman wrote: > > As far as I am can tell, my site is live-and-kicking at ... > > http://www.nmr.mgh.harvard.edu/nsg/strang/python.html > > -best > Gary > > On Mon, 12 May 2008, Christopher Clarke wrote: > >> Hi >> Gary Strangman's site appears to be down for a while now. Anybody >> have a >> link to a mirror site??? >> Thanks >> Chris > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From cohen at slac.stanford.edu Tue May 13 04:40:31 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 13 May 2008 10:40:31 +0200 Subject: [SciPy-user] help with precision for big numbers Message-ID: <482953FF.5090505@slac.stanford.edu> Hello, I am computing : In [22]: for i in range(6): s=(1.+Toff/Ton)**i*sp.factorial(Non+Noff-i)/sp.factorial(Non-i) print "%.14g"%s ....: ....: 4.3585218122217e+42 9.7493251062853e+42 1.7917678573714e+43 2.5383377979428e+43 2.4658138608587e+43 1.2329069304293e+43 A colleague using GSL and C code with double precision and long double ( I am not sure whether he has a 64bit machine) obtained the following values : 4.3585218122216e+42 1.0131651581042e+43 1.9350541758386e+43 2.8488297588735e+43 2.8759614708627e+43 1.4943721368208e+43 Close but not identical...... I was wondering if there is a way to increase numerical accuracy within scipy, assuming the standard behavior is not optimal with this respect. Or any other thoughts about these discrepancies? Or some nifty tricks to recover lost precision by organizing the computation differently? 
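One quick check relevant to the accuracy question is to list the floating point widths a given numpy build offers (a sketch added for illustration; the real width of np.longdouble is platform-dependent):

import numpy as np

for t in (np.float32, np.float64, np.longdouble):
    info = np.finfo(t)
    print t, "eps =", info.eps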
thanks in advance, Johann From cohen at slac.stanford.edu Tue May 13 04:47:11 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 13 May 2008 10:47:11 +0200 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <482953FF.5090505@slac.stanford.edu> References: <482953FF.5090505@slac.stanford.edu> Message-ID: <4829558F.9080600@slac.stanford.edu> sorry, for people who might want to check with their computers : Non=5 Noff=33 Ton=60 Toff=1000 Johann Cohen-Tanugi wrote: > Hello, > I am computing : > > In [22]: for i in range(6): > s=(1.+Toff/Ton)**i*sp.factorial(Non+Noff-i)/sp.factorial(Non-i) > print "%.14g"%s > ....: > ....: > 4.3585218122217e+42 > 9.7493251062853e+42 > 1.7917678573714e+43 > 2.5383377979428e+43 > 2.4658138608587e+43 > 1.2329069304293e+43 > > A colleague using GSL and C code with double precision and long double ( > I am not sure whether he has a 64bit machine) obtained the following > values : > 4.3585218122216e+42 > 1.0131651581042e+43 > 1.9350541758386e+43 > 2.8488297588735e+43 > 2.8759614708627e+43 > 1.4943721368208e+43 > > Close but not identical...... I was wondering if there is a way to > increase numerical accuracy within scipy, assuming the standard behavior > is not optimal with this respect. Or any other thoughts about these > discrepancies? Or some nifty tricks to recover lost precision by > organizing the computation differently? > > thanks in advance, > Johann > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From eads at soe.ucsc.edu Tue May 13 05:32:33 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 13 May 2008 02:32:33 -0700 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <482953FF.5090505@slac.stanford.edu> References: <482953FF.5090505@slac.stanford.edu> Message-ID: <48296031.3000609@soe.ucsc.edu> Hi Johann, First off, the first part of the expression s=... yields two different answers, depending whether you cast Toff to a float or not. In [1]: 1.+Toff/Ton Out[1]: 17.0 In [2]: 1.+float(Toff)/Ton Out[2]: 17.666666666666668 Which is the desired behavior for your problem? The limit of precision of floating point numbers in native Python is 32-bit. Numpy defines extra scalar types and you will find most of the ones supported by your machine in the numpy package. np.float64 will give you 64-bit precision. There is a np.float96 for 96-bit floats but I've never used it before. Damian Johann Cohen-Tanugi wrote: > Hello, > I am computing : > > In [22]: for i in range(6): > s=(1.+Toff/Ton)**i*sp.factorial(Non+Noff-i)/sp.factorial(Non-i) > print "%.14g"%s > ....: > ....: > 4.3585218122217e+42 > 9.7493251062853e+42 > 1.7917678573714e+43 > 2.5383377979428e+43 > 2.4658138608587e+43 > 1.2329069304293e+43 > > A colleague using GSL and C code with double precision and long double ( > I am not sure whether he has a 64bit machine) obtained the following > values : > 4.3585218122216e+42 > 1.0131651581042e+43 > 1.9350541758386e+43 > 2.8488297588735e+43 > 2.8759614708627e+43 > 1.4943721368208e+43 > > Close but not identical...... I was wondering if there is a way to > increase numerical accuracy within scipy, assuming the standard behavior > is not optimal with this respect. Or any other thoughts about these > discrepancies? Or some nifty tricks to recover lost precision by > organizing the computation differently? 
> > thanks in advance, > Johann > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From cohen at slac.stanford.edu Tue May 13 05:58:35 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 13 May 2008 11:58:35 +0200 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <48296031.3000609@soe.ucsc.edu> References: <482953FF.5090505@slac.stanford.edu> <48296031.3000609@soe.ucsc.edu> Message-ID: <4829664B.1070201@slac.stanford.edu> thanks Damian, dammit, yes of course I got bitten again by the int/int=int feature of python 2.x. Supposed to vanish in python 3, right..... looking forward to it! Johann Damian Eads wrote: > Hi Johann, > > First off, the first part of the expression s=... yields two different > answers, depending whether you cast Toff to a float or not. > > In [1]: 1.+Toff/Ton > Out[1]: 17.0 > > In [2]: 1.+float(Toff)/Ton > Out[2]: 17.666666666666668 > > Which is the desired behavior for your problem? > > The limit of precision of floating point numbers in native Python is > 32-bit. Numpy defines extra scalar types and you will find most of the > ones supported by your machine in the numpy package. np.float64 will > give you 64-bit precision. There is a np.float96 for 96-bit floats but > I've never used it before. > > Damian > > Johann Cohen-Tanugi wrote: > >> Hello, >> I am computing : >> >> In [22]: for i in range(6): >> s=(1.+Toff/Ton)**i*sp.factorial(Non+Noff-i)/sp.factorial(Non-i) >> print "%.14g"%s >> ....: >> ....: >> 4.3585218122217e+42 >> 9.7493251062853e+42 >> 1.7917678573714e+43 >> 2.5383377979428e+43 >> 2.4658138608587e+43 >> 1.2329069304293e+43 >> >> A colleague using GSL and C code with double precision and long double ( >> I am not sure whether he has a 64bit machine) obtained the following >> values : >> 4.3585218122216e+42 >> 1.0131651581042e+43 >> 1.9350541758386e+43 >> 2.8488297588735e+43 >> 2.8759614708627e+43 >> 1.4943721368208e+43 >> >> Close but not identical...... I was wondering if there is a way to >> increase numerical accuracy within scipy, assuming the standard behavior >> is not optimal with this respect. Or any other thoughts about these >> discrepancies? Or some nifty tricks to recover lost precision by >> organizing the computation differently? >> >> thanks in advance, >> Johann >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From c.j.lee at tnw.utwente.nl Tue May 13 09:21:58 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Tue, 13 May 2008 15:21:58 +0200 Subject: [SciPy-user] Mac OS X and mkl and modifying site.cfg Message-ID: <5B9379AA-EAC6-414E-B27E-5BCA533FD330@tnw.utwente.nl> Hi Everyone, I know this question has been asked before but I can't seem to find the answer and certainly what I am doing is not working. I want to link my numpy install to the MKL libraries on Mac OS X (10.5) I have installed the mkl libraries and they have ended up buried in / Libraries/frameworks... 
I have edited the site.cfg package to read: [DEFAULT] library_dirs = /usr/local/lib:/opt/intel/fc/10.1.014/lib include_dirs = /usr/local/include:/opt/intel/fc/10.1.014/include [mkl] library_dirs = /Library/Frameworks/Intel_MKL.framework/Libraries/ universal mkl_libs = mkl, vml include_dirs = /Library/Frameworks/Intel_MKL.framework/Headers Python seems to find the intel fortran compiler all right but it doesn't appear to even look for the mkl libraries I have also tried other variations on this but it seems that the site.cfg file is ignored by setup.py When I looked in setup.py, I noticed that configuration expects to add site.cfg.example. I changed that to site.cfg but it didn't make any difference. If anyone knows what the site.cfg file should look like for a default Mac OS X install with mkl, would they please let me know? Thank you all very much. Cheers Chris *************************************************** Chris Lee Laser Physics and Nonlinear Optics Group MESA+ Research Institute for Nanotechnology University of Twente Phone: ++31 (0)53 489 3968 fax: ++31 (0)53 489 1102 *************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Tue May 13 10:17:06 2008 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 13 May 2008 10:17:06 -0400 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <482953FF.5090505@slac.stanford.edu> References: <482953FF.5090505@slac.stanford.edu> Message-ID: On Tue, 13 May 2008, Johann Cohen-Tanugi apparently wrote: > sp.factorial(Non+Noff-i)/sp.factorial(Non-i) So Damien deduced the problem. But if you are doing this a lot, I wonder if you can get some gain by replacing the above expression with a helper function. (I just mean to observe that once you compute 33! the other values of this expression are very simple multiples of 33!. fwiw, Alan Isaac From lev at columbia.edu Tue May 13 10:28:56 2008 From: lev at columbia.edu (Lev Givon) Date: Tue, 13 May 2008 10:28:56 -0400 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <48296031.3000609@soe.ucsc.edu> References: <482953FF.5090505@slac.stanford.edu> <48296031.3000609@soe.ucsc.edu> Message-ID: <20080513142855.GA12021@localhost.cc.columbia.edu> Received from Damian Eads on Tue, May 13, 2008 at 05:32:33AM EDT: (snip) > The limit of precision of floating point numbers in native Python is > 32-bit. Numpy defines extra scalar types and you will find most of the > ones supported by your machine in the numpy package. np.float64 will > give you 64-bit precision. There is a np.float96 for 96-bit floats but > I've never used it before. > 128-bit floats are also available on certain machines. > Damian Although it isn't as fast as similar packages, mpmath is useful if one occasionally needs to handle arbitrary precision in Python - especially considering that it provides a number of special functions (like gamma and factorial) apart from the usual transcendentals: http://code.google.com/p/mpmath L.G. From cohen at slac.stanford.edu Tue May 13 10:39:04 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 13 May 2008 16:39:04 +0200 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: References: <482953FF.5090505@slac.stanford.edu> Message-ID: <4829A808.9070608@slac.stanford.edu> Right, I was considering that in case I would hit a performance penalty, but that would be in computation time, not in precision, right? 
But for now I am not ready to trade code readability for performance. And yes, that was worth it, and a valid point...As a matter of fact such helper function could be part of scipy, if it is not there already. Johann Alan G Isaac wrote: > On Tue, 13 May 2008, Johann Cohen-Tanugi apparently wrote: > >> sp.factorial(Non+Noff-i)/sp.factorial(Non-i) >> > > So Damien deduced the problem. But if you are doing this > a lot, I wonder if you can get some gain by replacing the > above expression with a helper function. (I just mean to > observe that once you compute 33! the other values of this > expression are very simple multiples of 33!. > > fwiw, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Tue May 13 12:29:17 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 13 May 2008 12:29:17 -0400 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <4829664B.1070201@slac.stanford.edu> References: <482953FF.5090505@slac.stanford.edu> <48296031.3000609@soe.ucsc.edu> <4829664B.1070201@slac.stanford.edu> Message-ID: 2008/5/13 Johann Cohen-Tanugi : > thanks Damian, > dammit, yes of course I got bitten again by the int/int=int feature of > python 2.x. Supposed to vanish in python 3, right..... looking forward > to it! Do you know about "from __future__ import true_division"? Anne From cohen at slac.stanford.edu Tue May 13 12:29:26 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 13 May 2008 18:29:26 +0200 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: References: <482953FF.5090505@slac.stanford.edu> <48296031.3000609@soe.ucsc.edu> <4829664B.1070201@slac.stanford.edu> Message-ID: <4829C1E6.7010005@slac.stanford.edu> sure about the syntax? In [5]: import __future__ In [6]: from __future__ import true_division ------------------------------------------------------------ SyntaxError: future feature true_division is not defined (, line 1) Johann Anne Archibald wrote: > 2008/5/13 Johann Cohen-Tanugi : > >> thanks Damian, >> dammit, yes of course I got bitten again by the int/int=int feature of >> python 2.x. Supposed to vanish in python 3, right..... looking forward >> to it! >> > > Do you know about "from __future__ import true_division"? > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aisaac at american.edu Tue May 13 12:40:02 2008 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 13 May 2008 12:40:02 -0400 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: References: <482953FF.5090505@slac.stanford.edu><48296031.3000609@soe.ucsc.edu> <4829664B.1070201@slac.stanford.edu> Message-ID: On Tue, 13 May 2008, Anne Archibald apparently wrote: > Do you know about "from __future__ import true_division"? 
``from __future__ import division`` Cheers, Alan From robert.kern at gmail.com Tue May 13 13:16:45 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 May 2008 12:16:45 -0500 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <48296031.3000609@soe.ucsc.edu> References: <482953FF.5090505@slac.stanford.edu> <48296031.3000609@soe.ucsc.edu> Message-ID: <3d375d730805131016l2c69299g5eeb07513496d89c@mail.gmail.com> On Tue, May 13, 2008 at 4:32 AM, Damian Eads wrote: > The limit of precision of floating point numbers in native Python is > 32-bit. Incorrect. Python floats are 64-bit doubles. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue May 13 13:28:16 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 May 2008 12:28:16 -0500 Subject: [SciPy-user] Mac OS X and mkl and modifying site.cfg In-Reply-To: <5B9379AA-EAC6-414E-B27E-5BCA533FD330@tnw.utwente.nl> References: <5B9379AA-EAC6-414E-B27E-5BCA533FD330@tnw.utwente.nl> Message-ID: <3d375d730805131028r202e5389ued0933cc64269b4b@mail.gmail.com> On Tue, May 13, 2008 at 8:21 AM, Chris Lee wrote: > Hi Everyone, > > I know this question has been asked before but I can't seem to find the > answer and certainly what I am doing is not working. > > I want to link my numpy install to the MKL libraries on Mac OS X (10.5) > > I have installed the mkl libraries and they have ended up buried in > /Libraries/frameworks... > I have edited the site.cfg package to read: > > > [DEFAULT] > library_dirs = /usr/local/lib:/opt/intel/fc/10.1.014/lib > include_dirs = /usr/local/include:/opt/intel/fc/10.1.014/include > > > [mkl] > library_dirs = /Library/Frameworks/Intel_MKL.framework/Libraries/universal > mkl_libs = mkl, vml > include_dirs = /Library/Frameworks/Intel_MKL.framework/Headers > > Python seems to find the intel fortran compiler all right but it doesn't > appear to even look for the mkl libraries Can you show us the full build log? > I have also tried other variations on this but it seems that the site.cfg > file is ignored by setup.py > When I looked in setup.py, I noticed that configuration expects to add > site.cfg.example. That's just for building the source tarball. It doesn't affect the loading of the configuration for the binary build. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From eads at soe.ucsc.edu Tue May 13 15:04:37 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 13 May 2008 12:04:37 -0700 Subject: [SciPy-user] help with precision for big numbers In-Reply-To: <3d375d730805131016l2c69299g5eeb07513496d89c@mail.gmail.com> References: <482953FF.5090505@slac.stanford.edu> <48296031.3000609@soe.ucsc.edu> <3d375d730805131016l2c69299g5eeb07513496d89c@mail.gmail.com> Message-ID: <4829E645.4040409@soe.ucsc.edu> Robert Kern wrote: > On Tue, May 13, 2008 at 4:32 AM, Damian Eads wrote: >> The limit of precision of floating point numbers in native Python is >> 32-bit. > > Incorrect. Python floats are 64-bit doubles. I stand corrected. It has been incorrect in my mind for quite some time then. 
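Tying together Alan's helper-function suggestion and the integer-division diagnosis above: since Python integers have arbitrary precision, the factorial ratio can be formed exactly and converted to float only at the end. A sketch (not from the original posts), reusing the Non, Noff, Ton, Toff values given earlier in the thread:

def factorial_ratio(n, k):
    # n! / k! computed exactly with Python long integers (requires k <= n)
    result = 1
    for m in xrange(k + 1, n + 1):
        result *= m
    return result

Non, Noff, Ton, Toff = 5, 33, 60, 1000
for i in range(6):
    # float(Toff) avoids the integer-division pitfall identified above
    s = (1.0 + float(Toff) / Ton) ** i * factorial_ratio(Non + Noff - i, Non - i)
    print "%.14g" % s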
Damian From martin.wiechert at gmx.de Wed May 14 09:15:57 2008 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Wed, 14 May 2008 15:15:57 +0200 Subject: [SciPy-user] permanent of a matrix Message-ID: <200805141515.57982.martin.wiechert@gmx.de> Hi, does anyone on this list happen to know of a software that can compute the permanent of a matrix? I would be very grateful. Thanks, Martin From camillo.lafleche at yahoo.com Wed May 14 17:41:41 2008 From: camillo.lafleche at yahoo.com (Camillo Lafleche) Date: Wed, 14 May 2008 14:41:41 -0700 (PDT) Subject: [SciPy-user] can NumPy use parallel linear algebra libraries? Message-ID: <960545.76511.qm@web46412.mail.sp1.yahoo.com> Hi! Besides some discussions about mpi4py for SciPy I couldn't find much information whether SciPy is ready to use parallel numerical algorithms. With pBLAS (http://www.netlib.org/scalapack/pblas_qref.html) a parallel linear algebra library is available. Because NumPy is built on top of BLAS, I wonder whether you could accelerate cluster computations by using pBLAS? Higher level linear algebra routines than (p)BLAS should give the same advantages for NumPy functions like solve() and svd(). Am I naive if I assume that small changes in the libraries used for the NumPy compilation can enable parallel computing? Or is this option already available? I'm grateful to receive any information about how to perform efficient linear algebra with NumPy on an MPI cluster. Thanks Camillo -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed May 14 17:52:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 May 2008 16:52:36 -0500 Subject: [SciPy-user] can NumPy use parallel linear algebra libraries? In-Reply-To: <960545.76511.qm@web46412.mail.sp1.yahoo.com> References: <960545.76511.qm@web46412.mail.sp1.yahoo.com> Message-ID: <3d375d730805141452x2392766em1913b708e9b7aab6@mail.gmail.com> On Wed, May 14, 2008 at 4:41 PM, Camillo Lafleche wrote: > Hi! > Besides some discussions about mpi4py for SciPy I couldn't find much > information whether SciPy is ready to use parallel numerical algorithms. Nothing in scipy has been particularly tailored for parallel computation, no. > With pBLAS (http://www.netlib.org/scalapack/pblas_qref.html) a parallel > linear algebra library is available. Because NumPy is built on top of BLAS, > I wonder whether you could accelerate cluster computations by using pBLAS? > Higher level linear algebra routines than (p)BLAS should give the same > advantages for NumPy functions like solve() and svd(). Unfortunately, pBLAS is not an implementation of the BLAS interfaces which we use. Rather, it is a different set of interfaces covering the same functionality, but with the obvious additions to the subroutine signatures to describe the distributed matrices. > Am I naive if I assume that small changes in the libraries used for the > NumPy compilation can enable parallel computing? Or is this option already > available? Not across processes or machines, no. ATLAS can be compiled to use threads, though. > I'm grateful to receive any information about how to perform efficient > linear algebra with NumPy on an MPI cluster. Brian Granger is working on a distributed array type which can sit on top of MPI. You may also want to look at petsc4py and slepc4py (which uses petsc4py). PETSc's high-level focus is parallel PDEs, but it includes many lower-level tools for parallel linear algebra. 
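As a very small-scale illustration of the kind of thing the original poster asked about, here is a hand-rolled block-row matrix-vector product on top of mpi4py and numpy rather than pBLAS or ScaLAPACK (a sketch; it assumes mpi4py is installed, and the script name is arbitrary):

# run with, e.g.:  mpiexec -n 4 python matvec_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1000
if rank == 0:
    # root builds the full problem and splits A into blocks of rows
    A = np.random.rand(n, n)
    x = np.random.rand(n)
    blocks = np.array_split(A, size, axis=0)
else:
    A, x, blocks = None, None, None

x = comm.bcast(x, root=0)              # every rank needs the full vector
my_block = comm.scatter(blocks, root=0)
my_y = np.dot(my_block, x)             # local piece of the product
pieces = comm.gather(my_y, root=0)

if rank == 0:
    y = np.concatenate(pieces)
    print "max error vs serial dot:", abs(y - np.dot(A, x)).max()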
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From camillo.lafleche at yahoo.com Wed May 14 18:07:09 2008 From: camillo.lafleche at yahoo.com (Camillo Lafleche) Date: Wed, 14 May 2008 15:07:09 -0700 (PDT) Subject: [SciPy-user] can NumPy use parallel linear algebra libraries? Message-ID: <700418.49092.qm@web46403.mail.sp1.yahoo.com> Thank you for the immediate and concise answer! >Unfortunately, pBLAS is not an implementation of the BLAS interfaces >which we use. Rather, it is a different set of interfaces covering the >same functionality, but with the obvious additions to the subroutine >signatures to describe the distributed matrices. Only one more question before I try the impossible: Is there any reason why it will be impossible to write a wrapper so that NumPy can invoke pBLAS through the BLAS interface, if the distributed storage and computation is taken care of by a dictionary of modifiers? Camillo -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed May 14 18:19:33 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 14 May 2008 17:19:33 -0500 Subject: [SciPy-user] can NumPy use parallel linear algebra libraries? In-Reply-To: <700418.49092.qm@web46403.mail.sp1.yahoo.com> References: <700418.49092.qm@web46403.mail.sp1.yahoo.com> Message-ID: <3d375d730805141519l65627436rc4142c04664d36bc@mail.gmail.com> On Wed, May 14, 2008 at 5:07 PM, Camillo Lafleche wrote: > Thank you for the immediate and concise answer! > >>Unfortunately, pBLAS is not an implementation of the BLAS interfaces >>which we use. Rather, it is a different set of interfaces covering the >>same functionality, but with the obvious additions to the subroutine >>signatures to describe the distributed matrices. > > Only one more question before I try the impossible: > Is there any reason why it will be impossible to write a wrapper so that > NumPy can invoke pBLAS through the BLAS interface, if the distributed > storage and computation is taken care of by a dictionary of modifiers? I don't think you will be realistically be able to build numpy.linalg against this, no. If LAPACK were written with *just* BLAS calls and treated matrices as opaque objects, then perhaps you could. However, I believe that many LAPACK subroutines will actually access elements of the matrices. Besides, the algorithms you would use on distributed matrices are different than you would use on a single machine. That's why there is ScaLAPACK. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ondrej at certik.cz Wed May 14 19:15:07 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 15 May 2008 01:15:07 +0200 Subject: [SciPy-user] permanent of a matrix In-Reply-To: <200805141515.57982.martin.wiechert@gmx.de> References: <200805141515.57982.martin.wiechert@gmx.de> Message-ID: <85b5c3130805141615n3d2309acifaebd97629bb1e36@mail.gmail.com> On Wed, May 14, 2008 at 3:15 PM, Martin Wiechert wrote: > Hi, > > does anyone on this list happen to know of a software that can compute the > permanent of a matrix? I would be very grateful. Sage. CCing the author of the code. 
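For small matrices the permanent is also easy to write directly in Python/numpy from its definition, expanding along the first row. This brute-force sketch (not from the thread) is exponential in n, so for anything larger you would want Ryser's formula or a package such as the Sage code mentioned above:

import numpy as np

def permanent(M):
    # perm(M) = sum_j M[0, j] * perm(minor with row 0 and column j removed)
    M = np.asarray(M)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)
        total += M[0, j] * permanent(minor)
    return total

print permanent(np.array([[1., 2.], [3., 4.]]))   # 1*4 + 2*3 = 10.0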
Ondrej From lorenzo.isella at gmail.com Thu May 15 10:52:05 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 15 May 2008 16:52:05 +0200 Subject: [SciPy-user] Standard error on linear regression coefficients Message-ID: Dear All, I happen to use a lot the module for linear regression included in scipy. I need to get the standard errors on the slope and the intercept and by looking at the help: Help on function linregress in module scipy.stats.stats: linregress(*args) Calculates a regression line on two arrays, x and y, corresponding to x,y pairs. If a single 2D array is passed, linregress finds dim with 2 levels and splits data into x,y pairs along that dim. Returns: slope, intercept, r, two-tailed prob, stderr-of-the-estimate I am a bit puzzled. I have only one entry for the standard error of the estimate. Should I not have two, namely one standard error for the slope and one for the intercept? I presume the one provided here is the one for the slope, but I fear I have misunderstood something. Or to re-phrase my question: how do I get the errors on the estimated slope and intercept with SciPy linregress? Many thanks Lorenzo From xavier.gnata at gmail.com Thu May 15 11:48:34 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Thu, 15 May 2008 17:48:34 +0200 Subject: [SciPy-user] how to compile UMFPACK ? In-Reply-To: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: <482C5B52.8010805@gmail.com> Hi, I have followed this tutorial : http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 and it is fine for atlas but not for UMFPACK : On ubuntu hardy using gcc-4.3 and gfortran-4.3 (and not g77),I get this error : gcc -O3 -I../Include -I../../AMD/Include -I../../UFconfig -o umfpack_di_demo umfpack_di_demo.c ../Lib/libumfpack.a ../../AMD/Lib/libamd.a -L/usr/lib/gcc/x86_64-linux-gnu/4.3.1 -L/usr/local/lib/scipy/lib -llapack -lf77blas -lcblas -latlas -lgfortran -lm /usr/bin/ld: umfpack_di_demo: hidden symbol `__powidf2' in /usr/lib/gcc/x86_64-linux-gnu/4.3.1/libgcc.a(_powidf2.o) is referenced by DSO /usr/bin/ld: final link failed: Nonrepresentable section on output collect2: ld returned 1 exit status make[1]: *** [umfpack_di_demo] Error 1 I have no real clue where the problem is (could it be a bug in libgcc.a). However I can run scipy with UMFPACK waiting for a solution :) Xavier From robert.kern at gmail.com Thu May 15 12:51:49 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 May 2008 11:51:49 -0500 Subject: [SciPy-user] how to compile UMFPACK ? 
In-Reply-To: <482C5B52.8010805@gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> <482C5B52.8010805@gmail.com> Message-ID: <3d375d730805150951v71bb30a9wf556e3f683eccfa3@mail.gmail.com> On Thu, May 15, 2008 at 10:48 AM, Xavier Gnata wrote: > Hi, > > I have followed this tutorial : > http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 > > and it is fine for atlas but not for UMFPACK : > > On ubuntu hardy using gcc-4.3 and gfortran-4.3 (and not g77),I get this > error : > > gcc -O3 -I../Include -I../../AMD/Include -I../../UFconfig -o > umfpack_di_demo umfpack_di_demo.c ../Lib/libumfpack.a > ../../AMD/Lib/libamd.a -L/usr/lib/gcc/x86_64-linux-gnu/4.3.1 > -L/usr/local/lib/scipy/lib -llapack -lf77blas -lcblas -latlas > -lgfortran -lm > /usr/bin/ld: umfpack_di_demo: hidden symbol `__powidf2' in > /usr/lib/gcc/x86_64-linux-gnu/4.3.1/libgcc.a(_powidf2.o) is referenced > by DSO > /usr/bin/ld: final link failed: Nonrepresentable section on output > collect2: ld returned 1 exit status > make[1]: *** [umfpack_di_demo] Error 1 > > I have no real clue where the problem is (could it be a bug in libgcc.a). Where did this flag come from: -L/usr/lib/gcc/x86_64-linux-gnu/4.3.1 ? It looks like a flag that was explicitly added; if you are using gcc 4.3.1, that should be hidden from you. Can you doublecheck the versions of gcc gfortran that you are actually executing? I.e. gcc --version? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zhangchipr at gmail.com Thu May 15 13:20:24 2008 From: zhangchipr at gmail.com (zhang chi) Date: Fri, 16 May 2008 01:20:24 +0800 Subject: [SciPy-user] why the fft in scipy can't give the same result as the fft from matlab Message-ID: <90c482ab0805151020k324fcacw89f9f385a65e46f1@mail.gmail.com> hi ys = randn(64) Yf = fftpack.fft(ys,128) why Yf.shape=64,128? but in matlab the size is 128 X1 , Yf = fft(ys,128) -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu May 15 13:25:48 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 May 2008 12:25:48 -0500 Subject: [SciPy-user] Standard error on linear regression coefficients In-Reply-To: References: Message-ID: <3d375d730805151025y608f6198y59b1f51407303f34@mail.gmail.com> On Thu, May 15, 2008 at 9:52 AM, Lorenzo Isella wrote: > Dear All, > I happen to use a lot the module for linear regression included in scipy. > I need to get the standard errors on the slope and the intercept and > by looking at the help: > > Help on function linregress in module scipy.stats.stats: > > linregress(*args) > Calculates a regression line on two arrays, x and y, corresponding to > x,y pairs. If a single 2D array is passed, linregress finds dim with 2 > levels and splits data into x,y pairs along that dim. > > Returns: slope, intercept, r, two-tailed prob, stderr-of-the-estimate > > I am a bit puzzled. I have only one entry for the standard error of > the estimate. Should I not have two, namely one standard error for the > slope and one for the intercept? > I presume the one provided here is the one for the slope, but I fear I > have misunderstood something. Yes. The "Standard Error of the Estimate" is the root-mean-square of the residuals. C.f. 
http://www.tufts.edu/~gdallal/slrout.htm

> Or to re-phrase my question: how do I get the errors on the estimated
> slope and intercept with SciPy linregress?

According to Wikipedia: http://en.wikipedia.org/wiki/Regression_analysis

  slope, intercept, r, prob2, see = linregress(x, y)
  mx = x.mean()
  sx2 = ((x-mx)**2).sum()
  sd_intercept = see * sqrt(1./len(x) + mx*mx/sx2)
  sd_slope = see * sqrt(1./sx2)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Thu May 15 13:35:43 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 15 May 2008 12:35:43 -0500
Subject: [SciPy-user] why the fft in scipy can't give the same result as the fft from matlab
In-Reply-To: <90c482ab0805151020k324fcacw89f9f385a65e46f1@mail.gmail.com>
References: <90c482ab0805151020k324fcacw89f9f385a65e46f1@mail.gmail.com>
Message-ID: <3d375d730805151035p5d60194fqc14235b2d598ab32@mail.gmail.com>

On Thu, May 15, 2008 at 12:20 PM, zhang chi wrote:
> hi
>    ys = randn(64)
>    Yf = fftpack.fft(ys,128)
>
> why Yf.shape=64,128?

In [1]: from numpy import *

In [2]: from scipy import fftpack

In [3]: ys = random.rand(64)

In [4]: Yf = fftpack.fft(ys, 128)

In [5]: Yf.shape
Out[5]: (128,)

Can you show us a complete piece of code with its output that
demonstrates the behavior that you are seeing? What versions of numpy
and scipy are you using?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From aisaac at american.edu  Thu May 15 14:13:51 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 15 May 2008 14:13:51 -0400
Subject: [SciPy-user] Standard error on linear regression coefficients
In-Reply-To: <3d375d730805151025y608f6198y59b1f51407303f34@mail.gmail.com>
References: <3d375d730805151025y608f6198y59b1f51407303f34@mail.gmail.com>
Message-ID: 

> On Thu, May 15, 2008 at 9:52 AM, Lorenzo Isella
>> Or to re-phrase my question: how do I get the errors on
>> the estimated slope and intercept with SciPy linregress?

On Thu, 15 May 2008, Robert Kern apparently wrote:
> According to Wikipedia: http://en.wikipedia.org/wiki/Regression_analysis

You can also take a look at:
http://code.google.com/p/econpy/source/browse/trunk/pytrix/ls.py

hth,
Alan Isaac

From ryanlists at gmail.com  Thu May 15 15:57:16 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Thu, 15 May 2008 14:57:16 -0500
Subject: [SciPy-user] problems wtih _sparsetools and SfePy
Message-ID: 

I am trying to get started with SfePy and am having some trouble
running its tests. I have everything working correctly on one
computer, but not on my laptop. Both computers are running Ubuntu
7.10. My current problem is this:

/usr/lib/python2.5/site-packages/scipy/sparse/sparsetools.py in csr_matvec(*args)
    590           npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx)
    591     """
--> 592     return _sparsetools.csr_matvec(*args)
    593
    594 def csc_matvec(*args):

: 'module' object has no attribute 'csr_matvec'

It seems that _sparsetools.so is different on the two machines and I
don't know why. Both are running

In [1]: scipy.version.version
Out[1]: '0.7.0.dev3851'

from svn a while back. I just tried rebuilding and re-installing from
source and noticed that _sparsetools was not affected. Where does
_sparsetools come from and how would I change how it gets created?
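A quick way to compare what each machine actually picks up (just a rough
sketch; the '_sparsetools.so' file name is taken from the traceback above
and may sit elsewhere, or be named differently, on other installs):

import os
import scipy.sparse.sparsetools as st

# Assumption: _sparsetools.so lives next to the pure-python wrapper module;
# adjust the name/path if your build differs.
so = os.path.join(os.path.dirname(st.__file__), '_sparsetools.so')
print so
print os.path.getsize(so), 'bytes'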
On the desktop without problems, the file is 7.8 megs, on the laptop with problems it is only 1.9 megs. So, it seems I have some configuration differences somewhere, but I don't know where. Thanks, Ryan From wnbell at gmail.com Thu May 15 16:09:07 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 15 May 2008 15:09:07 -0500 Subject: [SciPy-user] problems wtih _sparsetools and SfePy In-Reply-To: References: Message-ID: On Thu, May 15, 2008 at 2:57 PM, Ryan Krauss wrote: > > from svn a while back. I just tried rebuilding and re-installing from > source and noticed that _sparesetools was not affected. Where does > _sparsetools come from and how would I change how it gets created? On > the desktop without problems, the file is 7.8 megs, on the laptop with > problems it is only 1.9 megs. So, it seems I have some configuration > differences somewhere, but I don't know where. > Ryan, you shouldn't have a _sparsetools.so anymore. There should be a sparsetools/ directory with several shared objects inside (e.g. _csr.so). Try removing your scipy/build directory and site-packages/scipy directory and installing again. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ryanlists at gmail.com Thu May 15 16:27:32 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 15 May 2008 15:27:32 -0500 Subject: [SciPy-user] problems wtih _sparsetools and SfePy In-Reply-To: References: Message-ID: Thanks Nathan. I seem to be up and running. On Thu, May 15, 2008 at 3:09 PM, Nathan Bell wrote: > On Thu, May 15, 2008 at 2:57 PM, Ryan Krauss wrote: >> >> from svn a while back. I just tried rebuilding and re-installing from >> source and noticed that _sparesetools was not affected. Where does >> _sparsetools come from and how would I change how it gets created? On >> the desktop without problems, the file is 7.8 megs, on the laptop with >> problems it is only 1.9 megs. So, it seems I have some configuration >> differences somewhere, but I don't know where. >> > > Ryan, you shouldn't have a _sparsetools.so anymore. There should be a > sparsetools/ directory with several shared objects inside (e.g. > _csr.so). > > Try removing your scipy/build directory and site-packages/scipy > directory and installing again. > > -- > Nathan Bell wnbell at gmail.com > http://graphics.cs.uiuc.edu/~wnbell/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From xavier.gnata at gmail.com Thu May 15 17:09:31 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Thu, 15 May 2008 23:09:31 +0200 Subject: [SciPy-user] how to compile UMFPACK ? 
In-Reply-To: <3d375d730805150951v71bb30a9wf556e3f683eccfa3@mail.gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> <482C5B52.8010805@gmail.com> <3d375d730805150951v71bb30a9wf556e3f683eccfa3@mail.gmail.com> Message-ID: <482CA68B.7020505@gmail.com> Robert Kern wrote: > On Thu, May 15, 2008 at 10:48 AM, Xavier Gnata wrote: > >> Hi, >> >> I have followed this tutorial : >> http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 >> >> and it is fine for atlas but not for UMFPACK : >> >> On ubuntu hardy using gcc-4.3 and gfortran-4.3 (and not g77),I get this >> error : >> >> gcc -O3 -I../Include -I../../AMD/Include -I../../UFconfig -o >> umfpack_di_demo umfpack_di_demo.c ../Lib/libumfpack.a >> ../../AMD/Lib/libamd.a -L/usr/lib/gcc/x86_64-linux-gnu/4.3.1 >> -L/usr/local/lib/scipy/lib -llapack -lf77blas -lcblas -latlas >> -lgfortran -lm >> /usr/bin/ld: umfpack_di_demo: hidden symbol `__powidf2' in >> /usr/lib/gcc/x86_64-linux-gnu/4.3.1/libgcc.a(_powidf2.o) is referenced >> by DSO >> /usr/bin/ld: final link failed: Nonrepresentable section on output >> collect2: ld returned 1 exit status >> make[1]: *** [umfpack_di_demo] Error 1 >> >> I have no real clue where the problem is (could it be a bug in libgcc.a). >> > > Where did this flag come from: -L/usr/lib/gcc/x86_64-linux-gnu/4.3.1 ? > > It looks like a flag that was explicitly added; if you are using gcc > 4.3.1, that should be hidden from you. Can you doublecheck the > versions of gcc gfortran that you are actually executing? I.e. gcc > --version? > well it comes from the tutorial but I have removed it : gcc -O3 -I../Include -I../../AMD/Include -I../../UFconfig -o umfpack_di_demo umfpack_di_demo.c ../Lib/libumfpack.a ../../AMD/Lib/libamd.a -L/usr/local/lib/scipy/lib -llapack -lf77blas -lcblas -latlas -lgfortran -lm /usr/bin/ld: umfpack_di_demo: hidden symbol `__powidf2' in /usr/lib/gcc/x86_64-linux-gnu/4.3.0/libgcc.a(_powidf2.o) is referenced by DSO /usr/bin/ld: final link failed: Nonrepresentable section on output collect2: ld returned 1 exit status make[1]: *** [umfpack_di_demo] Error 1 make[1]: Leaving directory `/usr/local/src/UMFPACK/Demo' make: *** [library] Error 2 gcc version 4.3.0 (Ubuntu 4.3.0-1ubuntu1) Xavier From robert.kern at gmail.com Thu May 15 18:31:37 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 May 2008 17:31:37 -0500 Subject: [SciPy-user] how to compile UMFPACK ? In-Reply-To: <482CA68B.7020505@gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> <482C5B52.8010805@gmail.com> <3d375d730805150951v71bb30a9wf556e3f683eccfa3@mail.gmail.com> <482CA68B.7020505@gmail.com> Message-ID: <3d375d730805151531j70f31829idf0c7119c01daa3a@mail.gmail.com> On Thu, May 15, 2008 at 4:09 PM, Xavier Gnata wrote: > gcc -O3 -I../Include -I../../AMD/Include -I../../UFconfig -o > umfpack_di_demo umfpack_di_demo.c ../Lib/libumfpack.a > ../../AMD/Lib/libamd.a -L/usr/local/lib/scipy/lib -llapack -lf77blas > -lcblas -latlas -lgfortran -lm > /usr/bin/ld: umfpack_di_demo: hidden symbol `__powidf2' in > /usr/lib/gcc/x86_64-linux-gnu/4.3.0/libgcc.a(_powidf2.o) is referenced > by DSO > /usr/bin/ld: final link failed: Nonrepresentable section on output > collect2: ld returned 1 exit status > make[1]: *** [umfpack_di_demo] Error 1 > make[1]: Leaving directory `/usr/local/src/UMFPACK/Demo' > make: *** [library] Error 2 > > gcc version 4.3.0 (Ubuntu 4.3.0-1ubuntu1) Hmm. No idea. Sorry. AMD/UMFPACK/etc. 
did get recently bundled into libsuitesparse, though, so you may just try to $ apt-get install libsuitesparse libsuitesparse-dev -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From xavier.gnata at gmail.com Thu May 15 19:01:53 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Fri, 16 May 2008 01:01:53 +0200 Subject: [SciPy-user] how to compile UMFPACK ? In-Reply-To: <3d375d730805151531j70f31829idf0c7119c01daa3a@mail.gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> <482C5B52.8010805@gmail.com> <3d375d730805150951v71bb30a9wf556e3f683eccfa3@mail.gmail.com> <482CA68B.7020505@gmail.com> <3d375d730805151531j70f31829idf0c7119c01daa3a@mail.gmail.com> Message-ID: <482CC0E1.90501@gmail.com> Robert Kern wrote: > On Thu, May 15, 2008 at 4:09 PM, Xavier Gnata wrote: > >> gcc -O3 -I../Include -I../../AMD/Include -I../../UFconfig -o >> umfpack_di_demo umfpack_di_demo.c ../Lib/libumfpack.a >> ../../AMD/Lib/libamd.a -L/usr/local/lib/scipy/lib -llapack -lf77blas >> -lcblas -latlas -lgfortran -lm >> /usr/bin/ld: umfpack_di_demo: hidden symbol `__powidf2' in >> /usr/lib/gcc/x86_64-linux-gnu/4.3.0/libgcc.a(_powidf2.o) is referenced >> by DSO >> /usr/bin/ld: final link failed: Nonrepresentable section on output >> collect2: ld returned 1 exit status >> make[1]: *** [umfpack_di_demo] Error 1 >> make[1]: Leaving directory `/usr/local/src/UMFPACK/Demo' >> make: *** [library] Error 2 >> >> gcc version 4.3.0 (Ubuntu 4.3.0-1ubuntu1) >> > > Hmm. No idea. Sorry. AMD/UMFPACK/etc. did get recently bundled into > libsuitesparse, though, so you may just try to > > $ apt-get install libsuitesparse libsuitesparse-dev > > ok it is much easier and *it works* :) ack, xavier From zhangchipr at gmail.com Thu May 15 20:18:39 2008 From: zhangchipr at gmail.com (zhang chi) Date: Fri, 16 May 2008 08:18:39 +0800 Subject: [SciPy-user] how to convert this matlab command to scipy? Message-ID: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> hi How to convert this matlab command (Yf12(:) = CYf(mask);) to scipy command? where size(CYf) = (128 1) size(mask) = (128 128) -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri May 16 02:25:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 May 2008 01:25:38 -0500 Subject: [SciPy-user] how to convert this matlab command to scipy? In-Reply-To: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> References: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> Message-ID: <3d375d730805152325r1db1f953p50e4bf9b4260c204@mail.gmail.com> On Thu, May 15, 2008 at 7:18 PM, zhang chi wrote: > hi > How to convert this matlab command (Yf12(:) = CYf(mask);) to scipy > command? > where > size(CYf) = (128 1) > size(mask) = (128 128) Please describe the operation you want to perform in English. I don't speak fluent Matlab. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From hoytak at gmail.com Fri May 16 02:38:53 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Thu, 15 May 2008 23:38:53 -0700 Subject: [SciPy-user] how to convert this matlab command to scipy? 
In-Reply-To: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> References: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> Message-ID: <4db580fd0805152338s3f976ae3h9dd8e2d8dfab409b@mail.gmail.com> I have to say I haven't seen that particular operation in matlab before. Is mask a logical matrix or a matrix of indices? If it's a logical matrix, I really don't understand what it's trying to do. If it's a matrix of indices, then you might get the same behavior in numpy by using CYf[mask.ravel()] -- but I'm really not sure of this. Normally, logical mask operations have to match shape exactly, and index matrices are one dimensional.... --Hoyt On Thu, May 15, 2008 at 5:18 PM, zhang chi wrote: > hi > How to convert this matlab command (Yf12(:) = CYf(mask);) to scipy > command? > where > size(CYf) = (128 1) > size(mask) = (128 128) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From eads at soe.ucsc.edu Fri May 16 03:05:19 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 16 May 2008 00:05:19 -0700 Subject: [SciPy-user] how to convert this matlab command to scipy? In-Reply-To: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> References: <90c482ab0805151718ra49e2e6uf3f2cf21b04ccf70@mail.gmail.com> Message-ID: <482D322F.2040007@soe.ucsc.edu> zhang chi wrote: > How to convert this matlab command (Yf12(:) = CYf(mask);) to scipy > command? > where > size(CYf) = (128 1) > size(mask) = (128 128) Like Robert said, you should be more specific about what you want to do, and preferably describe it in English. The poster's code will not work in MATLAB for the code specified. # L=rand(128,128)>0.5; % generate a random 128x128 matrix of logicals # Q=rand(1,128); % generate a random 1x128 matrix of doubles # Q(L) % ??? Index exceeds matrix dimensions. However, if we reverse the size specifications so that L is 1x128 and Q is 128x128, it works. After deciphering the output, Q(L) returns values Q(1,i) in the first column where mask(i) is true (e.g. Q(L) and Q(1,L)' are equivalent). However, Q can be longer, in which case the mask is applied to the succeeding columns. However, given an incorrect code snippet without a problem description, it is not worth speculating what the poster is trying to do. Damian From opossumnano at gmail.com Fri May 16 04:01:42 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Fri, 16 May 2008 10:01:42 +0200 Subject: [SciPy-user] ANN: MDP 2.3 released! Message-ID: <20080516080142.GC24462@diamond.bccn-berlin> Dear NumPy and SciPy users, we are proud to announce release 2.3 of the Modular toolkit for Data Processing (MDP): a Python data processing framework. The base of readily available algorithms includes Principal Component Analysis (PCA and NIPALS), four flavors of Independent Component Analysis (CuBICA, FastICA, TDSEP, and JADE), Slow Feature Analysis, Independent Slow Feature Analysis, Gaussian Classifiers, Growing Neural Gas, Fisher Discriminant Analysis, Factor Analysis, Restricted Boltzmann Machine, and many more. What's new in version 2.3? -------------------------- - Enhanced PCA nodes (with SVD, automatic dimensionality reduction, and iterative algorithms). - A complete implementation of the FastICA algorithm. 
- JADE and TDSEP nodes for more fun with ICA. - Restricted Boltzmann Machine nodes. - The new subpackage "hinet" allows combining nodes in arbitrary feed-forward network architectures with a HTML visualization tool. - The tutorial has been updated with a section on hierarchical networks. - MDP integrated into the official Debian repository as "python-mdp". - A bunch of bug-fixes. Resources --------- Download: http://sourceforge.net/project/showfiles.php?group_id=116959 Homepage: http://mdp-toolkit.sourceforge.net Mailing list: http://sourceforge.net/mail/?group_id=116959 -- Pietro Berkes Gatsby Computational Neuroscience Unit UCL London, United Kingdom Niko Wilbert Institute for Theoretical Biology Humboldt-University Berlin, Germany Tiziano Zito Bernstein Center for Computational Neuroscience Humboldt-University Berlin, Germany From zhangchipr at gmail.com Fri May 16 09:03:22 2008 From: zhangchipr at gmail.com (zhang chi) Date: Fri, 16 May 2008 21:03:22 +0800 Subject: [SciPy-user] how to resolve a cubic equation in scipy? Message-ID: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> hi I want to resolve a cubic equation 1.2r^3 + r - 20 = 0. Could scipy can complete this work? thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Fri May 16 09:17:33 2008 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 16 May 2008 09:17:33 -0400 Subject: [SciPy-user] how to resolve a cubic equation in scipy? In-Reply-To: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: On Fri, 16 May 2008, zhang chi apparently wrote: > I want to resolve a cubic equation 1.2r^3 + r - 20 = 0. Cubics have a closed form solution: http://mathworld.wolfram.com/CubicFormula.html hth, Alan Isaac From cohen at slac.stanford.edu Fri May 16 09:10:28 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 16 May 2008 15:10:28 +0200 Subject: [SciPy-user] how to resolve a cubic equation in scipy? In-Reply-To: References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: <482D87C4.3010000@slac.stanford.edu> yep.... but if you insist, scipy can propose the following among I am sure many other and even better solutions : from scipy import optimize def myCubicEq(r): return 1.2*r**3 + r - 20 results=optimize.fsolve(myCubicEq, 5) print results,myCubicEq(results) I am too lazy/busy to check with the closed form solution. As a matter of fact, Alan, is the closed form algorithm implemented in scipy? I mean, why not.... :) Johann Alan G Isaac wrote: > On Fri, 16 May 2008, zhang chi apparently wrote: > >> I want to resolve a cubic equation 1.2r^3 + r - 20 = 0. >> > > Cubics have a closed form solution: > http://mathworld.wolfram.com/CubicFormula.html > > hth, > Alan Isaac > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Fri May 16 09:57:01 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 16 May 2008 13:57:01 +0000 (UTC) Subject: [SciPy-user] how to resolve a cubic equation in scipy? References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: Fri, 16 May 2008 21:03:22 +0800, zhang chi wrote: > hi > I want to resolve a cubic equation 1.2r^3 + r - 20 = 0. > > Could scipy can complete this work? 
>>> scipy.roots([1.2, 0, 1, -20]) array([-1.22284347+2.30637633j, -1.22284347-2.30637633j, 2.44568694+0.j ]) From peridot.faceted at gmail.com Fri May 16 11:04:05 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 16 May 2008 11:04:05 -0400 Subject: [SciPy-user] how to resolve a cubic equation in scipy? In-Reply-To: References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: 2008/5/16 Pauli Virtanen : > Fri, 16 May 2008 21:03:22 +0800, zhang chi wrote: >> I want to resolve a cubic equation 1.2r^3 + r - 20 = 0. > >>>> scipy.roots([1.2, 0, 1, -20]) > array([-1.22284347+2.30637633j, -1.22284347-2.30637633j, > 2.44568694+0.j ]) Given that the equation has just one real root, scipy's root-finders (e.g. brentq) should be reliable and fast. Of course, if the OP really only has the one equation to solve, it's done. But presumably they're working with a family of related cubics... Anne From dwf at cs.toronto.edu Fri May 16 13:31:29 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 16 May 2008 13:31:29 -0400 Subject: [SciPy-user] how to resolve a cubic equation in scipy? In-Reply-To: References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: On 16-May-08, at 9:17 AM, Alan G Isaac wrote: > Cubics have a closed form solution: > http://mathworld.wolfram.com/CubicFormula.html Careful; the one that's most often presented in math textbooks is, IIRC, numerically unstable. David From xavier.gnata at gmail.com Fri May 16 17:33:39 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Fri, 16 May 2008 23:33:39 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) Message-ID: <482DFDB3.9040909@gmail.com> Hi, I have compiled ATLAS following http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 on Ubuntu hardy gcc-4.3 64bits . everything look fine but scipy.test() fails in a quite bad way : ---------------------------------------------------------------------- Ran 1567 tests in 11.328s FAILED (failures=2, errors=12) Is it well-known that some tests of scipy 0.7.0.dev4373 fail ? I can send you the quite long log if needed. The point is that the user has no real way to know it there is something wrong in the a ATLAS installation or if the errors are in the testsuite. When I see for instance "ERROR: Failure: ImportError (cannot import name flapack)" I must admit that it looks like a wrong installation of ATLAS but what should I do now ?? ATLAS installation is time consuming and the documentation is really unclear except this tutorial. As a result, we need a robust testsuite to be able to know if scipy+ATLAS is correctly installed or not. Note that however numpy.test() is ruining just fine and that I can use scipy to do what I have to do. Xavier From robert.kern at gmail.com Fri May 16 17:37:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 May 2008 16:37:20 -0500 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482DFDB3.9040909@gmail.com> References: <482DFDB3.9040909@gmail.com> Message-ID: <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> On Fri, May 16, 2008 at 4:33 PM, Xavier Gnata wrote: > Hi, > > I have compiled ATLAS following > http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 > on Ubuntu hardy gcc-4.3 64bits . 
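(For completeness, the failing run below is nothing more than the standard
test call from a Python prompt, roughly like this -- the exact arguments
that scipy.test() accepts vary a bit between versions:)

import scipy
print scipy.__version__   # 0.7.0.dev4373 here
scipy.test()              # the run summarized below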
> > everything look fine but scipy.test() fails in a quite bad way : > ---------------------------------------------------------------------- > Ran 1567 tests in 11.328s > > FAILED (failures=2, errors=12) > > Is it well-known that some tests of scipy 0.7.0.dev4373 fail ? > I can send you the quite long log if needed. Just show us the parts at the end showing the failures. > The point is that the user has no real way to know it there is something > wrong in the a ATLAS installation or if the errors are in the testsuite. > > When I see for instance "ERROR: Failure: ImportError (cannot import name > flapack)" I must admit that it looks like a wrong installation of ATLAS > but what should I do now ?? It might just be a problem with the build of scipy rather than the ATLAS library itself. Show us scipy's build output. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From xavier.gnata at gmail.com Fri May 16 17:58:11 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Fri, 16 May 2008 23:58:11 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> Message-ID: <482E0373.4060809@gmail.com> Robert Kern wrote: > On Fri, May 16, 2008 at 4:33 PM, Xavier Gnata wrote: > >> Hi, >> >> I have compiled ATLAS following >> http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 >> on Ubuntu hardy gcc-4.3 64bits . >> >> everything look fine but scipy.test() fails in a quite bad way : >> ---------------------------------------------------------------------- >> Ran 1567 tests in 11.328s >> >> FAILED (failures=2, errors=12) >> >> Is it well-known that some tests of scipy 0.7.0.dev4373 fail ? >> I can send you the quite long log if needed. >> > > Just show us the parts at the end showing the failures. > > >> The point is that the user has no real way to know it there is something >> wrong in the a ATLAS installation or if the errors are in the testsuite. >> >> When I see for instance "ERROR: Failure: ImportError (cannot import name >> flapack)" I must admit that it looks like a wrong installation of ATLAS >> but what should I do now ?? >> > > It might just be a problem with the build of scipy rather than the > ATLAS library itself. Show us scipy's build output. > > Here it is. Ok it is long but I cannot cut it without removing an interesting part : /usr/lib/python2.5/site-packages/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) /usr/lib/python2.5/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:20: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead ' install scikits.umfpack instead', DeprecationWarning ) /usr/lib/python2.5/site-packages/scipy/splinalg/__init__.py:3: DeprecationWarning: scipy.splinalg has moved to scipy.sparse.linalg warn('scipy.splinalg has moved to scipy.sparse.linalg', DeprecationWarning) E..........................................E.............E............................... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
./usr/lib/python2.5/site-packages/numpy/lib/utils.py:114: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) /usr/lib/python2.5/site-packages/numpy/lib/utils.py:114: DeprecationWarning: read_array is deprecated warnings.warn(str1, DeprecationWarning) ..E................../usr/lib/python2.5/site-packages/numpy/lib/utils.py:114: DeprecationWarning: npfile is deprecated warnings.warn(str1, DeprecationWarning) ............................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..EEE............................................................................................................................................................................................................................................................................................................................................................................................................/usr/lib/python2.5/site-packages/scipy/ndimage/_segmenter.py:30: UserWarning: The segmentation code is under heavy development and therefore the public API will change in the future. The NIPY group is actively working on this code, and has every intention of generalizing this for the Scipy community. Use this module minimally, if at all, until it this warning is removed. warnings.warn(_msg, UserWarning) ...F....................................E...........E.... _naupd: Number of update iterations taken ----------------------------------------- 1 - 1: 11 _naupd: Number of wanted "converged" Ritz values ------------------------------------------------ 1 - 1: 4 _naupd: Real part of the final Ritz values ------------------------------------------ 1 - 4: 1.033D+00 7.746D-01 5.164D-01 2.582D-01 _naupd: Imaginary part of the final Ritz values ----------------------------------------------- 1 - 4: 0.000D+00 0.000D+00 0.000D+00 0.000D+00 _naupd: Associated Ritz estimates --------------------------------- 1 - 4: 2.700D-15 6.598D-19 1.478D-22 4.431D-26 ============================================= = Nonsymmetric implicit Arnoldi update code = = Version Number: 2.4 = = Version Date: 07/31/96 = ============================================= = Summary of timing statistics = ============================================= Total number update iterations = 11 Total number of OP*x operations = 52 Total number of B*x operations = 0 Total number of reorthogonalization steps = 51 Total number of iterative refinement steps = 0 Total number of restart steps = 0 Total time in user OP*x operation = 0.004001 Total time in user B*x operation = 0.000000 Total time in Arnoldi update routine = 0.004001 Total time in naup2 routine = 0.004001 Total time in basic Arnoldi iteration loop = 0.004001 Total time in reorthogonalization phase = 0.000000 Total time in (re)start vector generation = 0.000000 Total time in Hessenberg eig. 
subproblem = 0.000000 Total time in getting the shifts = 0.000000 Total time in applying the shifts = 0.000000 Total time in convergence testing = 0.000000 Total time in computing final Ritz vectors = 0.000000 .EE........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................F...........................................................................................................................................................E...warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ..warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ............................building extensions here: /home/gnata/.python25_compiled/m12 ................................................................................................ ====================================================================== ERROR: Failure: ImportError (/usr/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/cluster/__init__.py", line 9, in import vq, hierarchy File "/usr/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 178, in import _hierarchy_wrap, scipy, types, math, sys, scipy.stats File "/usr/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, in import scipy.linalg as linalg File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 18, in from scipy.linalg import clapack ImportError: /usr/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File 
"/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/integrate/tests/test_integrate.py", line 9, in from scipy.linalg import norm File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 7, in from interpolate import * File "/usr/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 13, in import scipy.linalg as slin File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: test_integer (test_array_import.TestReadArray) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", line 51, in test_integer from scipy import stats File "/usr/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, in import scipy.linalg as linalg File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (/usr/lib/python2.5/site-packages/scipy/lib/lapack/clapack.so: undefined symbol: clapack_sgesv) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/lib/lapack/__init__.py", line 16, in import clapack ImportError: 
/usr/lib/python2.5/site-packages/scipy/lib/lapack/clapack.so: undefined symbol: clapack_sgesv ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/maxentropy/__init__.py", line 9, in from maxentropy import * File "/usr/lib/python2.5/site-packages/scipy/maxentropy/maxentropy.py", line 75, in from scipy.linalg import norm File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/signal/__init__.py", line 11, in from ltisys import * File "/usr/lib/python2.5/site-packages/scipy/signal/ltisys.py", line 9, in import scipy.interpolate as interpolate File "/usr/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 7, in from interpolate import * File "/usr/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 13, in import scipy.linalg as slin File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File 
"/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/sparse/linalg/dsolve/tests/test_linsolve.py", line 6, in from scipy.linalg import norm, inv File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/sparse/linalg/eigen/lobpcg/tests/test_lobpcg.py", line 8, in from scipy import array, arange, ones, sort, cos, pi, rand, \ File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/sparse/linalg/isolve/tests/test_iterative.py", line 9, in from scipy.linalg import norm File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack 
====================================================================== ERROR: Failure: ImportError (cannot import name flapack) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/nose/loader.py", line 364, in loadTestsFromName addr.filename, addr.module) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.5/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, in import scipy.linalg as linalg File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in from basic import * File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: cannot import name flapack ====================================================================== FAIL: test_texture2 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", line 152, in test_texture2 assert_array_almost_equal(tem0, truth_tem0, decimal=6) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 255, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 240, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 66.6666666667%) x: array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.91816598e-01, 1.02515288e-01, 9.30087343e-02,... y: array([ 0. , 0. , 0. , 0. , 0. , 0. , 0.13306101, 0.08511007, 0.05084148, 0.07550675, 0.4334695 , 0.03715914, 0.00289055, 0.02755581, 0.48142046, 0.03137803, 0.00671277, 0.51568902, 0.01795249, 0.49102375, 1. 
], dtype=float32) ====================================================================== FAIL: test_pbdv (test_basic.TestCephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 368, in test_pbdv assert_equal(cephes.pbdv(1,0),(0.0,0.0)) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 139, in assert_equal assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: item=1 ACTUAL: 1.0 DESIRED: 0.0 ---------------------------------------------------------------------- Ran 1567 tests in 13.438s FAILED (failures=2, errors=12) python setup.py build output : mkl_info: libraries mkl,vml,guide not found in /usr/lib libraries mkl,vml,guide not found in /usr/local/lib/scipy/lib NOT AVAILABLE fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/lib libraries mkl,vml,guide not found in /usr/local/lib/scipy/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = c include_dirs = ['/usr/include'] customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgf90 Could not locate executable pgf77 customize AbsoftFCompiler Could not locate executable f90 customize NAGFCompiler Found executable /usr/bin/f95 customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler Could not locate executable efort Could not locate executable efc customize IntelEM64TFCompiler customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -llapack -lf77blas -lcblas -latlas -o _configtest ATLAS version 3.8.1 built by root on Tue May 13 00:29:20 CEST 2008: UNAME : Linux dupilon 2.6.24-16-generic #1 SMP Thu Apr 10 12:47:45 UTC 2008 x86_64 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Core2Duo -DATL_CPUMHZ=2201 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 2097152 F77 : gfortran, version GNU Fortran (Ubuntu 4.3.0-1ubuntu1) 4.3.0 F77FLAGS : -O -fPIC -m64 SMC : gcc, version gcc (Ubuntu 4.3.0-1ubuntu1) 4.3.0 SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC SKC : gcc, version gcc (Ubuntu 4.3.0-1ubuntu1) 4.3.0 SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.8.1\\""')] include_dirs = ['/usr/include'] ATLAS version 3.8.1 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = f77 include_dirs = ['/usr/include'] customize GnuFCompiler customize IntelFCompiler customize LaheyFCompiler customize PGroupFCompiler customize AbsoftFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -llapack -llapack -lf77blas -lcblas -latlas -o _configtest ATLAS version 3.8.1 built by root on Tue May 13 00:29:20 CEST 2008: UNAME : Linux dupilon 2.6.24-16-generic #1 SMP Thu Apr 10 12:47:45 UTC 2008 x86_64 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Core2Duo -DATL_CPUMHZ=2201 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 2097152 F77 : gfortran, version GNU Fortran (Ubuntu 4.3.0-1ubuntu1) 4.3.0 F77FLAGS : -O -fPIC -m64 SMC : gcc, version gcc (Ubuntu 4.3.0-1ubuntu1) 4.3.0 SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC SKC : gcc, version gcc (Ubuntu 4.3.0-1ubuntu1) 4.3.0 SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC success! removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.8.1\\""')] include_dirs = ['/usr/include'] ATLAS version 3.8.1 ATLAS version 3.8.1 umfpack_info: amd_info: FOUND: libraries = ['amd'] library_dirs = ['/usr/lib'] swig_opts = ['-I/usr/include'] define_macros = [('SCIPY_AMD_H', None)] include_dirs = ['/usr/include'] FOUND: libraries = ['umfpack', 'gfortran', 'amd'] library_dirs = ['/usr/lib'] swig_opts = ['-I/usr/include', '-I/usr/include'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] include_dirs = ['/usr/include'] From robert.kern at gmail.com Fri May 16 18:15:54 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 May 2008 17:15:54 -0500 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482E0373.4060809@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> Message-ID: <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> On Fri, May 16, 2008 at 4:58 PM, Xavier Gnata wrote: > Here it is. 
Ok it is long but I cannot cut it without removing an > interesting part : Well, the parts interesting to me are the ones starting here: > ====================================================================== > ERROR: Failure: ImportError > (/usr/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined > symbol: clapack_sgesv) > ---------------------------------------------------------------------- This looks like an ATLAS build problem. Specifically, it looks like you didn't make a full LAPACK for ATLAS. I'm not current on the build process for recent ATLASes. I *think* that if you pass the correct --with-netlib-lapack to ./configure, it will correctly merge a full LAPACK. But it might not. Instructions for doing this manually are here: http://svn.scipy.org/svn/scipy/trunk/INSTALL.txt This does not involve recompiling ATLAS. But you can probably save yourself a lot of trouble by just installing the atlas3-base (and possibly atlas3-sse or atlas3-sse2 if you have those capabilities) and their corresponding -dev packages. It's an old version, but provides a reasonable speedup for the effort expended. > ====================================================================== > FAIL: test_texture2 (test_segment.TestSegment) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", > line 152, in test_texture2 > assert_array_almost_equal(tem0, truth_tem0, decimal=6) > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > 255, in assert_array_almost_equal > header='Arrays are not almost equal') > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > 240, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 66.6666666667%) > x: array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, > 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, > 1.91816598e-01, 1.02515288e-01, 9.30087343e-02,... > y: array([ 0. , 0. , 0. , 0. , 0. , > 0. , 0.13306101, 0.08511007, 0.05084148, 0.07550675, > 0.4334695 , 0.03715914, 0.00289055, 0.02755581, 0.48142046, > 0.03137803, 0.00671277, 0.51568902, 0.01795249, 0.49102375, > 1. ], dtype=float32) This might be an actual bug in the (heavily in-development) image segmentation code. > ====================================================================== > FAIL: test_pbdv (test_basic.TestCephes) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", > line 368, in test_pbdv > assert_equal(cephes.pbdv(1,0),(0.0,0.0)) > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > 139, in assert_equal > assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), > verbose) > File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line > 145, in assert_equal > assert desired == actual, msg > AssertionError: > Items are not equal: > item=1 > > ACTUAL: 1.0 > DESIRED: 0.0 Not sure about this one. Probably a real bug. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From xavier.gnata at gmail.com Fri May 16 19:00:06 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 17 May 2008 01:00:06 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> Message-ID: <482E11F6.9060401@gmail.com> Robert Kern wrote: > On Fri, May 16, 2008 at 4:58 PM, Xavier Gnata wrote: > > >> Here it is. Ok it is long but I cannot cut it without removing an >> interesting part : >> > > Well, the parts interesting to me are the ones starting here: > > >> ====================================================================== >> ERROR: Failure: ImportError >> (/usr/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined >> symbol: clapack_sgesv) >> ---------------------------------------------------------------------- >> > > This looks like an ATLAS build problem. Specifically, it looks like > you didn't make a full LAPACK for ATLAS. I'm not current on the build > process for recent ATLASes. I *think* that if you pass the correct > --with-netlib-lapack to ./configure, it will correctly merge a full > LAPACK. But it might not. Instructions for doing this manually are > here: > > http://svn.scipy.org/svn/scipy/trunk/INSTALL.txt > > This does not involve recompiling ATLAS. > > But you can probably save yourself a lot of trouble by just installing > the atlas3-base (and possibly atlas3-sse or atlas3-sse2 if you have > those capabilities) and their corresponding -dev packages. It's an old > version, but provides a reasonable speedup for the effort expended. > > >> ====================================================================== >> FAIL: test_texture2 (test_segment.TestSegment) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", >> line 152, in test_texture2 >> assert_array_almost_equal(tem0, truth_tem0, decimal=6) >> File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line >> 255, in assert_array_almost_equal >> header='Arrays are not almost equal') >> File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line >> 240, in assert_array_compare >> assert cond, msg >> AssertionError: >> Arrays are not almost equal >> >> (mismatch 66.6666666667%) >> x: array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, >> 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, >> 1.91816598e-01, 1.02515288e-01, 9.30087343e-02,... >> y: array([ 0. , 0. , 0. , 0. , 0. , >> 0. , 0.13306101, 0.08511007, 0.05084148, 0.07550675, >> 0.4334695 , 0.03715914, 0.00289055, 0.02755581, 0.48142046, >> 0.03137803, 0.00671277, 0.51568902, 0.01795249, 0.49102375, >> 1. ], dtype=float32) >> > > This might be an actual bug in the (heavily in-development) image > segmentation code. 
> > >> ====================================================================== >> FAIL: test_pbdv (test_basic.TestCephes) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", >> line 368, in test_pbdv >> assert_equal(cephes.pbdv(1,0),(0.0,0.0)) >> File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line >> 139, in assert_equal >> assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), >> verbose) >> File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line >> 145, in assert_equal >> assert desired == actual, msg >> AssertionError: >> Items are not equal: >> item=1 >> >> ACTUAL: 1.0 >> DESIRED: 0.0 >> > > Not sure about this one. Probably a real bug. > > Thanks for our support :) I used to use the atlas unbutu package but I would like to understand how to compile ATLAS and to see if there is a real performance improvement or not. As you said,it should note be that large... I have merge the lib using ar as explain in your link. Now I get this : FAILED (failures=5, errors=5) **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...........................FF.............NO ATLAS INFO AVAILABLE ......................................... **************************************************************** WARNING: cblas module is empty It looks there still something wrong isn't it? xavier From jh at physics.ucf.edu Sat May 17 01:45:32 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Sat, 17 May 2008 01:45:32 -0400 Subject: [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 Message-ID: NUMPY/SCIPY DOCUMENTATION MARATHON 2008 As we all know, the state of the numpy and scipy reference documentation (aka the docstrings) is best described as "incomplete". Most functions have docstrings shorter than 5 lines, whereas our competitors IDL and Matlab usually have a concise and well-written page or two per function. The (wonderful) categorized list of functions is very new and isn't included in the package yet. There isn't even a "Getting Started"-type of document you can hand a new user so they can dive right in. Documentation tools are limited to plain-text paginators, while our competition enjoys HTML-based documents with formulae, images, search capability, and cross linking. Tales of woe abound. A university class switched to Numpy and got hopelessly bogged down because students couldn't find out how to call the functions. A developer looked something up while giving a presentation and the words "Blah, Blah, Blah" stared down at the audience in response. To head off another pedagogical meltdown, the University of Central Florida has hired Stefan van der Walt full time to coordinate a community documentation effort to write reference documentation and tools. The project starts now and continues through the summer. The goals: 1. Produce complete docstrings for all numpy functions and as much of scipy as possible, 2. Produce an 8-15 page Getting Started tutorial that is not discipline-specific, 3. Write reference sections on topics in numpy, such as slicing and the use principles of the modules, 4. 
Complete a first edition, in both PDF and HTML, of a NumPy Reference Manual, and 5. Check everything into the sources by 1 August 2008 so that the Packaging Team can cut a release and have it available in time for Fall 2008 classes. Even Stefan could not document the hundreds of functions that need it by himself, and in any case such a large contribution requires community review. To make it easy for everyone to contribute, Pauli Virtanen and Emmanuelle Guillart have provided a wiki system for editing reference documentation. The idea was developed by Fernando Perez, Stefan, and Gael Varoquaux. We encourage community members to write, review, and proofread reference pages on this wiki. Stefan will check updates into the sources roughly weekly. Near the end of the project, we will put these wiki pages through a vetting process and then check them into the sources a final time for a release hopefully to occur in early August. Meanwhile, Perry Greenfield has taken the lead on on task 3, writing reference docs for things that currently don't have docstrings, such as basic concepts like slicing. We have proposed two small extensions to the current docstring format, for images (to be used sparingly) and indexing. These appear in updated versions of the doc standard, which are linked from the wiki frontpage. Please take a look and comment on these if you like. All docstrings will remain readable in plain text, but we are now generating a full reference guide in PDF and HTML (you guessed it, linked from the wiki). These are searchable formats. There are several ways you can help: 1. Write some docstrings on the wiki! Many people can do this, many more than can write code for the package itself. However, you must know numpy, the function group, and the function you are writing well. You should be familiar with the concept of a reference page and write in that concise style. We'll do tutorial docs in another project at a later date. See the instructions on the wiki for guidelines and format. 2. Review others' docstrings and leave comments on their wiki pages. 3. Proofread docstrings. Make sure they are correct, complete, and concise. Fix grammar. 4. Write examples ("doctests"). Even if you are not a top-notch English writer, you can help by producing a code snippet of a few lines that demonstrates a function. It is fine for them to go into the docstring templates before the actual text. 5. Write a new help function that optionally produces ASCII or points the user's PDF or HTML reader to the right page (either local or global). 6. If you are in a position to hire someone, such as a knowledgeable student or short-term consultant, hire them to work on the tasks above for the summer. We can provide supervision to them or guidance to you if you like. The home for this project is here: http://scipy.org/Developer_Zone/DocMarathon2008 This is not a sprint. It is a marathon, and this time we are going to finish. We hope you will join us! --jh-- and Stefan and Perry and Pauli and Emmanuelle...and you! Joe Harrington Stefan van der Walt Perry Greenfield Pauli Virtanen Emmanuelle Guillart ...and you! 
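As an illustration of item 4 in the list above (contributing doctest examples), a minimal sketch of what an Examples section can look like for a small numpy-based function. The function, the numbers and the exact section layout are only illustrative; the authoritative format is the doc standard linked from the wiki.

import numpy as np

def clip_to_unit(x):
    """
    Clip the values of an array to the interval [0, 1].

    Parameters
    ----------
    x : array_like
        Input data.

    Returns
    -------
    out : ndarray
        Copy of `x` with values below 0 set to 0 and values above 1 set to 1.

    Examples
    --------
    >>> clip_to_unit(np.array([-0.5, 0.2, 1.7]))
    array([ 0. ,  0.2,  1. ])
    """
    return np.clip(x, 0.0, 1.0)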
From david at ar.media.kyoto-u.ac.jp Sat May 17 04:23:46 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 17 May 2008 17:23:46 +0900 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482E11F6.9060401@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> Message-ID: <482E9612.3040009@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: >> Not sure about this one. Probably a real bug. This error is often seen when mixing g77/gfortran. Hardy still uses g77 as the default Fortran compiler, so it is easy to make a mistake there. > Thanks for our support :) > I used to use the atlas unbutu package but I would like to understand > how to compile ATLAS and to see if there is a real performance > improvement or not. As you said,it should note be that large... It is large if you are using a recent Core 2 Duo: the atlas packages (sse2) are built for the Pentium 4, which has a deficient L1 cache, whereas the Core 2 Duo behaves much better. For large matrices, this can be significant. How did you build LAPACK? From LAPACK 3.1.1 on, you only need to use this as the make.inc:

####################################################################
# LAPACK make include file. #
# LAPACK, Version 3.1.1 #
# February 2007 #
####################################################################
#
SHELL = /bin/sh
#
# The machine (platform) identifier to append to the library names
#
PLAT = _LINUX
#
# Modify the FORTRAN and OPTS definitions to refer to the
# compiler and desired compiler options for your machine. NOOPT
# refers to the compiler options desired when NO OPTIMIZATION is
# selected. Define LOADER and LOADOPTS to refer to the loader and
# desired load options for your machine.
#
FORTRAN = gfortran
OPTS = -O2 -fPIC
DRVOPTS = $(OPTS)
NOOPT = -O0 -fPIC
LOADER = gfortran
LOADOPTS =
#
# Timer for the SECOND and DSECND routines
#
# Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME
#TIMER = EXT_ETIME
# For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME_
# TIMER = EXT_ETIME_
# For gfortran compiler: SECOND and DSECND will use a call to the INTERNAL FUNCTION ETIME
TIMER = INT_ETIME
# If your Fortran compiler does not provide etime (like Nag Fortran Compiler, etc...)
# SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME
# TIMER = INT_CPU_TIME
# If neither of this works...you can use the NONE value... In that case, SECOND and DSECND will always return 0
# TIMER = NONE
#
# The archiver and the flag(s) to use when building archive (library)
# If you system has no ranlib, set RANLIB = echo.
#
ARCH = ar
ARCHFLAGS= cr
RANLIB = ranlib

LAPACKLIB = liblapack_pic.a

And then, to build atlas:

./configure -C if gfortran -Fa alg -fPIC --with-netlib-lapack=PATH_TO_LAPACK/liblapack_pic.a

But really, you have to weigh the pain of building it and making sure it works against the time you gain from a faster ATLAS.
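Once numpy and scipy have been rebuilt, a quick way to check from Python what they were actually linked against (a small convenience check, not part of the original instructions; it only uses numpy.show_config and numpy.distutils.system_info, which exist in the 1.0.x-era numpy discussed here):

import numpy
import numpy.distutils.system_info as si

numpy.show_config()            # prints the atlas_* / blas_* sections numpy was built with
print(si.get_info('atlas'))    # an empty dict means ATLAS was not found at build time

If the atlas sections are missing, scipy falls back to flapack instead of clapack, which is what the "clapack module is empty" warning earlier in the thread is about.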
I bet that for the time you need to make it work, you could have inverted millions of big matrices :) cheers, David From c.j.lee at tnw.utwente.nl Sat May 17 05:22:25 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Sat, 17 May 2008 11:22:25 +0200 Subject: [SciPy-user] Mac OS X and mkl and modifying site.cfg Message-ID: <91F2E99A-BCDA-457F-AE2E-26E9989CBBD2@tnw.utwente.nl> Hi Everyone, I sent this message earlier in the week, but it didn't seem to go through, so I am resending it. I know this question has been asked before but I can't seem to find the answer and certainly what I am doing is not working. I want to link my numpy install to the MKL libraries on Mac OS X (10.5) I have installed the mkl libraries and they have ended up buried in / Libraries/frameworks... I have edited the site.cfg package to read: [DEFAULT] library_dirs = /usr/local/lib:/opt/intel/fc/10.1.014/lib include_dirs = /usr/local/include:/opt/intel/fc/10.1.014/include [mkl] library_dirs = /Library/Frameworks/Intel_MKL.framework/Libraries/ universal mkl_libs = mkl, vml include_dirs = /Library/Frameworks/Intel_MKL.framework/Headers Python seems to find the intel fortran compiler all right but it doesn't appear to even look for the mkl libraries I have also tried other variations on this but it seems that the site.cfg file is ignored by setup.py When I looked in setup.py, I noticed that configuration expects to add site.cfg.example. I changed that to site.cfg but it didn't make any difference. If anyone knows what the site.cfg file should look like for a default Mac OS X install with mkl, would they please let me know? Thank you all very much. Cheers Chris *************************************************** Chris Lee Laser Physics and Nonlinear Optics Group MESA+ Research Institute for Nanotechnology University of Twente Phone: ++31 (0)53 489 3968 fax: ++31 (0)53 489 1102 *************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat May 17 05:25:21 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 May 2008 04:25:21 -0500 Subject: [SciPy-user] Mac OS X and mkl and modifying site.cfg In-Reply-To: <91F2E99A-BCDA-457F-AE2E-26E9989CBBD2@tnw.utwente.nl> References: <91F2E99A-BCDA-457F-AE2E-26E9989CBBD2@tnw.utwente.nl> Message-ID: <3d375d730805170225r473da898y98927235419927d7@mail.gmail.com> On Sat, May 17, 2008 at 4:22 AM, Chris Lee wrote: > Hi Everyone, > I sent this message earlier in the week, but it didn't seem to go through, > so I am resending it. It got through, and I responded to it. http://projects.scipy.org/pipermail/scipy-user/2008-May/016765.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sat May 17 05:15:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 17 May 2008 18:15:52 +0900 Subject: [SciPy-user] Mac OS X and mkl and modifying site.cfg In-Reply-To: <91F2E99A-BCDA-457F-AE2E-26E9989CBBD2@tnw.utwente.nl> References: <91F2E99A-BCDA-457F-AE2E-26E9989CBBD2@tnw.utwente.nl> Message-ID: <482EA248.5050002@ar.media.kyoto-u.ac.jp> Chris Lee wrote: > Hi Everyone, > > I sent this message earlier in the week, but it didn't seem to go > through, so I am resending it. 
> > I know this question has been asked before but I can't seem to find > the answer and certainly what I am doing is not working. > > I want to link my numpy install to the MKL libraries on Mac OS X (10.5) By link, you mean rebuilding numpy and scipy with MKL, right ? The configuration file should certainly be named site.cfg, otherwise, it won't be taken into account. What does the following return : python setup.py config ? Note that for MKL 10, you will have to tell numpy to change the name for MKL as well (with lapack_libs something, it is mentioned in the site.cfg.example). I don't know why Intel thinks it is a good idea to change the library name every release, it is extremely annoying, cheers, David From ryanlists at gmail.com Sat May 17 09:23:48 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 17 May 2008 08:23:48 -0500 Subject: [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: References: Message-ID: This is very good news. I will find some way to get involved. Ryan On Sat, May 17, 2008 at 12:45 AM, Joe Harrington wrote: > NUMPY/SCIPY DOCUMENTATION MARATHON 2008 > > As we all know, the state of the numpy and scipy reference > documentation (aka the docstrings) is best described as "incomplete". > Most functions have docstrings shorter than 5 lines, whereas our > competitors IDL and Matlab usually have a concise and well-written > page or two per function. The (wonderful) categorized list of > functions is very new and isn't included in the package yet. There > isn't even a "Getting Started"-type of document you can hand a new > user so they can dive right in. Documentation tools are limited to > plain-text paginators, while our competition enjoys HTML-based > documents with formulae, images, search capability, and cross linking. > > Tales of woe abound. A university class switched to Numpy and got > hopelessly bogged down because students couldn't find out how to call > the functions. A developer looked something up while giving a > presentation and the words "Blah, Blah, Blah" stared down at the > audience in response. > > To head off another pedagogical meltdown, the University of Central > Florida has hired Stefan van der Walt full time to coordinate a > community documentation effort to write reference documentation and > tools. The project starts now and continues through the summer. The > goals: > > 1. Produce complete docstrings for all numpy functions and as much of > scipy as possible, > > 2. Produce an 8-15 page Getting Started tutorial that is not > discipline-specific, > > 3. Write reference sections on topics in numpy, such as slicing and > the use principles of the modules, > > 4. Complete a first edition, in both PDF and HTML, of a NumPy > Reference Manual, and > > 5. Check everything into the sources by 1 August 2008 so that the > Packaging Team can cut a release and have it available in time for > Fall 2008 classes. > > Even Stefan could not document the hundreds of functions that need it > by himself, and in any case such a large contribution requires > community review. To make it easy for everyone to contribute, Pauli > Virtanen and Emmanuelle Guillart have provided a wiki system for > editing reference documentation. The idea was developed by Fernando > Perez, Stefan, and Gael Varoquaux. We encourage community members to > write, review, and proofread reference pages on this wiki. Stefan > will check updates into the sources roughly weekly. 
Near the end of > the project, we will put these wiki pages through a vetting process > and then check them into the sources a final time for a release > hopefully to occur in early August. > > Meanwhile, Perry Greenfield has taken the lead on on task 3, writing > reference docs for things that currently don't have docstrings, such > as basic concepts like slicing. > > We have proposed two small extensions to the current docstring format, > for images (to be used sparingly) and indexing. These appear in > updated versions of the doc standard, which are linked from the wiki > frontpage. Please take a look and comment on these if you like. All > docstrings will remain readable in plain text, but we are now > generating a full reference guide in PDF and HTML (you guessed it, > linked from the wiki). These are searchable formats. > > There are several ways you can help: > > 1. Write some docstrings on the wiki! Many people can do this, many > more than can write code for the package itself. However, you must > know numpy, the function group, and the function you are writing well. > You should be familiar with the concept of a reference page and write > in that concise style. We'll do tutorial docs in another project at a > later date. See the instructions on the wiki for guidelines and > format. > > 2. Review others' docstrings and leave comments on their wiki pages. > > 3. Proofread docstrings. Make sure they are correct, complete, and > concise. Fix grammar. > > 4. Write examples ("doctests"). Even if you are not a top-notch > English writer, you can help by producing a code snippet of a few > lines that demonstrates a function. It is fine for them to go into > the docstring templates before the actual text. > > 5. Write a new help function that optionally produces ASCII or points > the user's PDF or HTML reader to the right page (either local or > global). > > 6. If you are in a position to hire someone, such as a knowledgeable > student or short-term consultant, hire them to work on the tasks above > for the summer. We can provide supervision to them or guidance to you > if you like. > > The home for this project is here: > > http://scipy.org/Developer_Zone/DocMarathon2008 > > This is not a sprint. It is a marathon, and this time we are going to > finish. We hope you will join us! > > --jh-- and Stefan and Perry and Pauli and Emmanuelle...and you! > Joe Harrington > Stefan van der Walt > Perry Greenfield > Pauli Virtanen > Emmanuelle Guillart > ...and you! > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jh at physics.ucf.edu Sat May 17 10:22:31 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Sat, 17 May 2008 10:22:31 -0400 Subject: [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: (ryanlists@gmail.com) References: Message-ID: Ryan writes: > This is very good news. I will find some way to get involved. Great! Please dive right in, and sign up on the Developer_Zone page so we can keep track of who's involved. One thing I forgot to mention in my too-wordy announcement was that discussion of documentation is on the scipy-dev mailing list. We had to pick one spot and decided that since we are going after scipy as soon as numpy is done, we'd like to use that list rather than numpy-discussion. We also wanted to keep it on a development list rather than polluting the new users' discussion space. 
--jh-- From xavier.gnata at gmail.com Sat May 17 14:13:13 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sat, 17 May 2008 20:13:13 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482E9612.3040009@ar.media.kyoto-u.ac.jp> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> Message-ID: <482F2039.7020403@gmail.com> David Cournapeau wrote: > Xavier Gnata wrote: > >>> Not sure about this one. Probably a real bug. >>> > > This error is often seen when mixing g77/gfortran. hardy still uses g77 > as the default fortran compiler, so it is easy to make a mistake there. > > >> Thanks for our support :) >> I used to use the atlas unbutu package but I would like to understand >> how to compile ATLAS and to see if there is a real performance >> improvement or not. As you said,it should note be that large... >> > > It is large if you are using a recent core 2 duo: the atlas package > (sse2) are built for pentium4 which has a deficient L1 cache, whereas > core 2 duo has much better behaviour. For large matrices, this can be > significant. > > How did you build lapack ? From lapack 3.1.1, you only need to use this > as the make.inc: > > #################################################################### > # LAPACK make include file. # > # LAPACK, Version 3.1.1 # > # February 2007 # > #################################################################### > # > SHELL = /bin/sh > # > # The machine (platform) identifier to append to the library names > # > PLAT = _LINUX > # > # Modify the FORTRAN and OPTS definitions to refer to the > # compiler and desired compiler options for your machine. NOOPT > # refers to the compiler options desired when NO OPTIMIZATION is > # selected. Define LOADER and LOADOPTS to refer to the loader and > # desired load options for your machine. > # > FORTRAN = gfortran > OPTS = -O2 -fPIC > DRVOPTS = $(OPTS) > NOOPT = -O0 -fPIC > LOADER = gfortran > LOADOPTS = > # > # Timer for the SECOND and DSECND routines > # > # Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME > #TIMER = EXT_ETIME > # For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION > ETIME_ > # TIMER = EXT_ETIME_ > # For gfortran compiler: SECOND and DSECND will use a call to the > INTERNAL FUNCTION ETIME > TIMER = INT_ETIME > # If your Fortran compiler does not provide etime (like Nag Fortran > Compiler, etc...) > # SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME > # TIMER = INT_CPU_TIME > # If neither of this works...you can use the NONE value... In that case, > SECOND and DSECND will always return 0 > # TIMER = NONE > # > # The archiver and the flag(s) to use when building archive (library) > # If you system has no ranlib, set RANLIB = echo. > # > ARCH = ar > ARCHFLAGS= cr > RANLIB = ranlib > > LAPACKLIB = liblapack_pic. > > And then, to build atlas: > > ./configure -C if gfortran -Fa alg -fPIC > --with-netlib-lapack=PATH_TO_LAPACK/liblapack_pic.a > > But really, you have to think about the pain to build and make sure it > works with the time you gain by having a faster atlas. 
I bet that for > the time you need to make it work, you could have inverted millions of > big matrices :) > > cheers, > > David > _______________________________________________ > Ok I give up and I use the ubuntu packages. I hope that the documentation project will provide us with a nice "how to install scipy without errors". btw, this odcumentation project is the best news of the day ;) Xavier From dwf at cs.toronto.edu Sat May 17 21:48:22 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 17 May 2008 21:48:22 -0400 Subject: [SciPy-user] optimizing numpy on Linux? Message-ID: Hi, I'm going to be deploying an application that makes extensive use of numpy and scipy on a custom-built quad core machine running Linux. We're at the moment running Ubuntu Server edition. I've installed atlas3-base and the header packages via apt, and am about to start compiling numpy and scipy from source. The trouble is, I'd like it to be as fast as humanly possible, and don't know the first thing about tweaking a Linux system for that. Is it a matter of compiling and installing an optimized BLAS "by hand"? Are there any strategies for benchmarking a numpy install? Thanks, David From robert.kern at gmail.com Sat May 17 22:06:17 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 May 2008 21:06:17 -0500 Subject: [SciPy-user] optimizing numpy on Linux? In-Reply-To: References: Message-ID: <3d375d730805171906n248cbfefxd8a0bedf2ee627ed@mail.gmail.com> On Sat, May 17, 2008 at 8:48 PM, David Warde-Farley wrote: > Hi, > > I'm going to be deploying an application that makes extensive use of > numpy and scipy on a custom-built quad core machine running Linux. > We're at the moment running Ubuntu Server edition. > > I've installed atlas3-base and the header packages via apt, and am > about to start compiling numpy and scipy from source. The trouble is, > I'd like it to be as fast as humanly possible, and don't know the > first thing about tweaking a Linux system for that. Is it a matter of > compiling and installing an optimized BLAS "by hand"? If your application heavily uses linear algebra operations, yes, this would help. In particular, the ATLAS in Ubuntu is fairly old and is missing many optimizations for newer CPUs (in addition to being a generic build that won't take advantage of your precise configuration). However, in another thread, Xavier Gnata has been having trouble doing exactly this. I haven't built numpy against a recent ATLAS myself for some time, so I don't know what the problem is. If you get it working, please let him know how you did it and update the wiki page for installing on Linux. > Are there any > strategies for benchmarking a numpy install? Find the hotspots in your application through profiling first, identify non-numpy things that you could do to speed those up, identify the numpy operations that affect the remaining hotspots, then write benchmarks targeting those specific features. If these operations are very small (a[0]), then the timeit module is fairly useful. If the operations are large (> 1s) or use a lot of memory (linalg.solve(bigmatrix)), don't use timeit; only do one execution instead of looping. IPython has some good utility functions for timing such functions using resource.getrusage() which will record just the time your actual code spent on the CPU rather than the wall-clock time between when you started and ended. 
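A short sketch of the two timing approaches described above: timeit for very cheap operations, and a single timed run for large ones. The array sizes are arbitrary, and wall-clock time via time.time() is used for simplicity instead of the resource.getrusage()-based CPU timing mentioned above.

import time
import timeit
import numpy as np

# Cheap operation: let timeit average over many repetitions.
t = timeit.Timer("a[0]", setup="import numpy as np; a = np.arange(1000)")
print(min(t.repeat(repeat=3, number=100000)))   # best of 3 runs of 100000 calls

# Expensive operation: time a single execution only.
a = np.random.rand(1500, 1500)
b = np.random.rand(1500)
start = time.time()
x = np.linalg.solve(a, b)
print(time.time() - start)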
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cohen at slac.stanford.edu Sun May 18 04:58:32 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 10:58:32 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482F2039.7020403@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> Message-ID: <482FEFB8.2090206@slac.stanford.edu> well, I am not sure the doc marathon will deal with building scipy, at least as a priority.... This is going to remain a very tough issue because scipy depends on packages that are complex and hard to build. Note that if you dont need smart lin. algebra you still have plenty of good stuff in scipy despite this lapack failure. Maybe what would be nice is for the scipy build process to clearly disable some specific libs if something goes wrong and report it at the end of the build. Then when testing likewise it would disable the corresponding test functions.... yes this is complete vaporware I know.... Now on the specific problem you have, I don't completely recall now but when I was struggling with compiling lapack one of the woes I was facing was a mismatch of fortran compilers, so can you make sure in the python setup.py config that you are using gfortran all the way through, and not g77 in any step of the builds of either lapack/blas/scipy? well my 2 cents, HTH, Johann Xavier Gnata wrote: > David Cournapeau wrote: > >> Xavier Gnata wrote: >> >> >>>> Not sure about this one. Probably a real bug. >>>> >>>> >> This error is often seen when mixing g77/gfortran. hardy still uses g77 >> as the default fortran compiler, so it is easy to make a mistake there. >> >> >> >>> Thanks for our support :) >>> I used to use the atlas unbutu package but I would like to understand >>> how to compile ATLAS and to see if there is a real performance >>> improvement or not. As you said,it should note be that large... >>> >>> >> It is large if you are using a recent core 2 duo: the atlas package >> (sse2) are built for pentium4 which has a deficient L1 cache, whereas >> core 2 duo has much better behaviour. For large matrices, this can be >> significant. >> >> How did you build lapack ? From lapack 3.1.1, you only need to use this >> as the make.inc: >> >> #################################################################### >> # LAPACK make include file. # >> # LAPACK, Version 3.1.1 # >> # February 2007 # >> #################################################################### >> # >> SHELL = /bin/sh >> # >> # The machine (platform) identifier to append to the library names >> # >> PLAT = _LINUX >> # >> # Modify the FORTRAN and OPTS definitions to refer to the >> # compiler and desired compiler options for your machine. NOOPT >> # refers to the compiler options desired when NO OPTIMIZATION is >> # selected. Define LOADER and LOADOPTS to refer to the loader and >> # desired load options for your machine. 
>> # >> FORTRAN = gfortran >> OPTS = -O2 -fPIC >> DRVOPTS = $(OPTS) >> NOOPT = -O0 -fPIC >> LOADER = gfortran >> LOADOPTS = >> # >> # Timer for the SECOND and DSECND routines >> # >> # Default : SECOND and DSECND will use a call to the EXTERNAL FUNCTION ETIME >> #TIMER = EXT_ETIME >> # For RS6K : SECOND and DSECND will use a call to the EXTERNAL FUNCTION >> ETIME_ >> # TIMER = EXT_ETIME_ >> # For gfortran compiler: SECOND and DSECND will use a call to the >> INTERNAL FUNCTION ETIME >> TIMER = INT_ETIME >> # If your Fortran compiler does not provide etime (like Nag Fortran >> Compiler, etc...) >> # SECOND and DSECND will use a call to the INTERNAL FUNCTION CPU_TIME >> # TIMER = INT_CPU_TIME >> # If neither of this works...you can use the NONE value... In that case, >> SECOND and DSECND will always return 0 >> # TIMER = NONE >> # >> # The archiver and the flag(s) to use when building archive (library) >> # If you system has no ranlib, set RANLIB = echo. >> # >> ARCH = ar >> ARCHFLAGS= cr >> RANLIB = ranlib >> >> LAPACKLIB = liblapack_pic. >> >> And then, to build atlas: >> >> ./configure -C if gfortran -Fa alg -fPIC >> --with-netlib-lapack=PATH_TO_LAPACK/liblapack_pic.a >> >> But really, you have to think about the pain to build and make sure it >> works with the time you gain by having a faster atlas. I bet that for >> the time you need to make it work, you could have inverted millions of >> big matrices :) >> >> cheers, >> >> David >> _______________________________________________ >> >> > > Ok I give up and I use the ubuntu packages. > I hope that the documentation project will provide us with a nice "how > to install scipy without errors". > btw, this odcumentation project is the best news of the day ;) > > Xavier > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Sun May 18 05:02:30 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 18 May 2008 18:02:30 +0900 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482FEFB8.2090206@slac.stanford.edu> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> Message-ID: <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> Johann Cohen-Tanugi wrote: > well, I am not sure the doc marathon will deal with building scipy, at > least as a priority.... Yes, I agree. I think too many people try to build all the dependencies by themselves, and are surprised it is difficult. Frankly, I almost think it would be good to have less documentation, as an 'anti-incentive'. Building software which rely on binaries linked with several languages is just something inherently difficult and requires a lot of knowledge for the platform. The fact that ATLAS, BLAS and LAPACK use non standard build procedures certainly does not help either. > This is going to remain a very tough issue > because scipy depends on packages that are complex and hard to build. > Note that if you dont need smart lin. algebra you still have plenty of > good stuff in scipy despite this lapack failure. 
Maybe what would be > nice is for the scipy build process to clearly disable some specific > libs if something goes wrong and report it at the end of the build. That's one of the reason I worked on numscons: in numscons, all dependencies are actually checked, contrary to numpy.distutils which just checks for the *presence* of a dependency. For example, if your atlas is somewhat buggy and cannot be linked, numscons detects it because before using atlas, it tries to build/run a small program which uses atlas, and this is logged. cheers, David From cohen at slac.stanford.edu Sun May 18 05:21:49 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 11:21:49 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> Message-ID: <482FF52D.1080402@slac.stanford.edu> Salut! Super... je m'etais promis de tester ton setup numscons, et bien entendu j'ai pas trouv? le temps ;) Une raison de plus pour essayer, mais le marathon d'abord! Johann David Cournapeau wrote: > Johann Cohen-Tanugi wrote: > >> well, I am not sure the doc marathon will deal with building scipy, at >> least as a priority.... >> > > Yes, I agree. I think too many people try to build all the dependencies > by themselves, and are surprised it is difficult. Frankly, I almost > think it would be good to have less documentation, as an > 'anti-incentive'. Building software which rely on binaries linked with > several languages is just something inherently difficult and requires a > lot of knowledge for the platform. The fact that ATLAS, BLAS and LAPACK > use non standard build procedures certainly does not help either. > > >> This is going to remain a very tough issue >> because scipy depends on packages that are complex and hard to build. >> Note that if you dont need smart lin. algebra you still have plenty of >> good stuff in scipy despite this lapack failure. Maybe what would be >> nice is for the scipy build process to clearly disable some specific >> libs if something goes wrong and report it at the end of the build. >> > > That's one of the reason I worked on numscons: in numscons, all > dependencies are actually checked, contrary to numpy.distutils which > just checks for the *presence* of a dependency. For example, if your > atlas is somewhat buggy and cannot be linked, numscons detects it > because before using atlas, it tries to build/run a small program which > uses atlas, and this is logged. 
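numscons itself is the place to look for the real implementation; the following is only a minimal sketch, using plain SCons Configure machinery, of the "build and run a small program against the library" idea described above. The library names, header and LIBPATH are assumptions that depend on how ATLAS was installed.

# SConstruct (sketch)
env = Environment()
env.Append(LIBS=['cblas', 'atlas'], LIBPATH=['/usr/lib/atlas'])   # assumed install layout
conf = Configure(env)

test_prog = """
#include <cblas.h>
int main(void) {
    double x[2] = {1.0, 2.0};
    /* Fails at link or run time if the installed ATLAS is unusable. */
    return cblas_ddot(2, x, 1, x, 1) == 5.0 ? 0 : 1;
}
"""

ok, _ = conf.TryRun(test_prog, '.c')   # compiles, links and runs the check program
if not ok:
    print('ATLAS present but not usable; the build would disable it here.')
env = conf.Finish()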
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cohen at slac.stanford.edu Sun May 18 06:04:45 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 12:04:45 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482FF52D.1080402@slac.stanford.edu> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <482FF52D.1080402@slac.stanford.edu> Message-ID: <482FFF3D.6000202@slac.stanford.edu> ooops that happens to me once in a while. This was for David's eyes only ;) J. Johann Cohen-Tanugi wrote: > Salut! > Super... je m'etais promis de tester ton setup numscons, et bien entendu > j'ai pas trouv? le temps ;) Une raison de plus pour essayer, mais le > marathon d'abord! > Johann > > David Cournapeau wrote: > >> Johann Cohen-Tanugi wrote: >> >> >>> well, I am not sure the doc marathon will deal with building scipy, at >>> least as a priority.... >>> >>> >> Yes, I agree. I think too many people try to build all the dependencies >> by themselves, and are surprised it is difficult. Frankly, I almost >> think it would be good to have less documentation, as an >> 'anti-incentive'. Building software which rely on binaries linked with >> several languages is just something inherently difficult and requires a >> lot of knowledge for the platform. The fact that ATLAS, BLAS and LAPACK >> use non standard build procedures certainly does not help either. >> >> >> >>> This is going to remain a very tough issue >>> because scipy depends on packages that are complex and hard to build. >>> Note that if you dont need smart lin. algebra you still have plenty of >>> good stuff in scipy despite this lapack failure. Maybe what would be >>> nice is for the scipy build process to clearly disable some specific >>> libs if something goes wrong and report it at the end of the build. >>> >>> >> That's one of the reason I worked on numscons: in numscons, all >> dependencies are actually checked, contrary to numpy.distutils which >> just checks for the *presence* of a dependency. For example, if your >> atlas is somewhat buggy and cannot be linked, numscons detects it >> because before using atlas, it tries to build/run a small program which >> uses atlas, and this is logged. 
>> >> cheers, >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From xavier.gnata at gmail.com Sun May 18 07:15:30 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 18 May 2008 13:15:30 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> Message-ID: <48300FD2.9090308@gmail.com> Well from my point of view, it is much harder compile scipy than to compile gcc or a kernel because it looks like black magic :( To be very pragmatic, I do agree on this statement : "it would be good to have less documentation, as an 'anti-incentive'." but if you want the svn versions to be tested you need also a doc for "experts" users. I do like to try to fix a bug in a numerical algo but first I would like to be able to install the svn scipy version and to run the scipy.test() without errors coming from the installation itself. "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone". Of course it is not that esay because scipy is based on a mix of C/fortran old (but nice) libs... Cheers, Xavier > Johann Cohen-Tanugi wrote: > >> well, I am not sure the doc marathon will deal with building scipy, at >> least as a priority.... >> > > Yes, I agree. I think too many people try to build all the dependencies > by themselves, and are surprised it is difficult. Frankly, I almost > think it would be good to have less documentation, as an > 'anti-incentive'. Building software which rely on binaries linked with > several languages is just something inherently difficult and requires a > lot of knowledge for the platform. The fact that ATLAS, BLAS and LAPACK > use non standard build procedures certainly does not help either. > > >> This is going to remain a very tough issue >> because scipy depends on packages that are complex and hard to build. >> Note that if you dont need smart lin. algebra you still have plenty of >> good stuff in scipy despite this lapack failure. Maybe what would be >> nice is for the scipy build process to clearly disable some specific >> libs if something goes wrong and report it at the end of the build. >> > > That's one of the reason I worked on numscons: in numscons, all > dependencies are actually checked, contrary to numpy.distutils which > just checks for the *presence* of a dependency. For example, if your > atlas is somewhat buggy and cannot be linked, numscons detects it > because before using atlas, it tries to build/run a small program which > uses atlas, and this is logged. 
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Sun May 18 07:16:25 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 18 May 2008 20:16:25 +0900 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <48300FD2.9090308@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> Message-ID: <48301009.6080204@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: > Well from my point of view, it is much harder compile scipy than to > compile gcc or a kernel because it looks like black magic :( > No, it's not. Really. On a new debian/Ubuntu machine: sudo apt-get install g++ gcc g77 atlas3-base-dev python-dev And then, you can build numpy/scipy, and it will work 100 % of the time. What is difficult is to build the dependencies, and people try to build them by themselves, or in "weird" configurations. Try building the kernel with ICC, or try building gcc with a libc which is not glibc. For that matter, try to build any software from scratch, and you will see it is not easier. I think we should make it clearer on scipy install pages that you should really use the packaged libraries when available. Before, major distribution had broken blas/lapack, but those days are mostly over, cheers, David From cohen at slac.stanford.edu Sun May 18 07:31:55 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 13:31:55 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <48301009.6080204@ar.media.kyoto-u.ac.jp> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> Message-ID: <483013AB.3@slac.stanford.edu> last time I checked the Fedora blas/lapack distributions were still flawed.... Did that change for Fedora 9? Johann David Cournapeau wrote: > Xavier Gnata wrote: > >> Well from my point of view, it is much harder compile scipy than to >> compile gcc or a kernel because it looks like black magic :( >> >> > > No, it's not. Really. On a new debian/Ubuntu machine: > > sudo apt-get install g++ gcc g77 atlas3-base-dev python-dev > > And then, you can build numpy/scipy, and it will work 100 % of the time. > What is difficult is to build the dependencies, and people try to build > them by themselves, or in "weird" configurations. > > Try building the kernel with ICC, or try building gcc with a libc which > is not glibc. For that matter, try to build any software from scratch, > and you will see it is not easier. > > I think we should make it clearer on scipy install pages that you should > really use the packaged libraries when available. 
Before, major > distribution had broken blas/lapack, but those days are mostly over, > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From xavier.gnata at gmail.com Sun May 18 08:02:24 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 18 May 2008 14:02:24 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <48301009.6080204@ar.media.kyoto-u.ac.jp> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> Message-ID: <48301AD0.80700@gmail.com> Indeed, scipy is very easy to compile when the lib are installed. well atlas using the package is ok. umfpack is not on my ubuntu. Yes I'm able to compile scipy but it complains that umfpack (or libsparesuite...) is missing when I run the scipy.test(). Maybe the scipy.test() should clearly split the errors in two groups : One group including "real" bugs and one reporting that libs a missing but that it is *not* a problem if you do not plan to use this part of scipy I do agree that the doc should tell the user to use the .deb and I think all this stuff be just vanish when gfortran will be the default compiler. To tell you the story : I have installed gcc-4.3 on my hardy to be able to use openmp with scipy.weave.inline (it works just fine for instance to implement a TOTAL idl like function. A bug in gcc-4.2 prevent this to work.). I do hope that the next release of ubuntu will be based gcc-4.3 and on gfortran *and* that they will provide us this nice atlas packages. I must admit that I'm a user working "only" on large arrays so it is a kind of a corner case ;) Cheers, Xavier ps : I'm not sure but I don't think you can compile the kernel using icc ;). the kernel is written in gcc qnd not in C :) > Xavier Gnata wrote: > >> Well from my point of view, it is much harder compile scipy than to >> compile gcc or a kernel because it looks like black magic :( >> >> > > No, it's not. Really. On a new debian/Ubuntu machine: > > sudo apt-get install g++ gcc g77 atlas3-base-dev python-dev > > And then, you can build numpy/scipy, and it will work 100 % of the time. > What is difficult is to build the dependencies, and people try to build > them by themselves, or in "weird" configurations. > > Try building the kernel with ICC, or try building gcc with a libc which > is not glibc. For that matter, try to build any software from scratch, > and you will see it is not easier. > > I think we should make it clearer on scipy install pages that you should > really use the packaged libraries when available. 
Before, major > distribution had broken blas/lapack, but those days are mostly over, > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From mabshoff at googlemail.com Sun May 18 07:31:44 2008 From: mabshoff at googlemail.com (Michael Abshoff) Date: Sun, 18 May 2008 13:31:44 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <48301AD0.80700@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> <48301AD0.80700@gmail.com> Message-ID: <483013A0.5000806@googlemail.com> Xavier Gnata wrote: Hi. > Indeed, scipy is very easy to compile when the lib are installed. > > > well atlas using the package is ok. > umfpack is not on my ubuntu. > Yes I'm able to compile scipy but it complains that umfpack (or > libsparesuite...) is missing when I run the scipy.test(). > > Maybe the scipy.test() should clearly split the errors in two groups : > One group including "real" bugs and one reporting that libs a missing > but that it is *not* a problem if you do not plan to use this part of scipy > > I do agree that the doc should tell the user to use the .deb and I think > all this stuff be just vanish when gfortran will be the default compiler. > > To tell you the story : I have installed gcc-4.3 on my hardy to be able > to use openmp with scipy.weave.inline (it works just fine for instance > to implement a TOTAL idl like function. A bug in gcc-4.2 prevent this to > work.). > I do hope that the next release of ubuntu will be based gcc-4.3 and on > gfortran *and* that they will provide us this nice atlas packages. > > Do you mean the 8.04 release? I was surprised to see that it shipped gcc 4.2.3. > I must admit that I'm a user working "only" on large arrays so it is a > kind of a corner case ;) > > Cheers, > Xavier > ps : I'm not sure but I don't think you can compile the kernel using icc > ;). the kernel is written in gcc qnd not in C :) > The Linux kernel can be compiled with the Intel C compiler since Intel uses it as a test case, i.e. Intel's compiler try to be as close to gcc as possible on Linux and MSVC on Windows. Cheers, Michae From robince at gmail.com Sun May 18 08:38:26 2008 From: robince at gmail.com (Robin) Date: Sun, 18 May 2008 13:38:26 +0100 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <48300FD2.9090308@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> Message-ID: Hi, I wrote the Ubuntu install guide on the wiki. I'm sorry you're having so much trouble! 
I know most experts recommend using the system packages, but at the time I spent a lot of time trying that and couldn't get it to work - in the end this was the only way I had success which is why I put it on the wiki. Perhaps it is now easier to build with the distribution packages, but I still think it's useful to have the instructions on the wiki... From looking at your build log: On Sun, May 18, 2008 at 12:15 PM, Xavier Gnata wrote: > FOUND: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/lib'] > language = c > include_dirs = ['/usr/include'] It looks like you are picking up the libraries from /usr/lib - however I notice from this: > blas_opt_info: > blas_mkl_info: > libraries mkl,vml,guide not found in /usr/lib > libraries mkl,vml,guide not found in /usr/local/lib/scipy/lib > NOT AVAILABLE that you must have specified /usr/local/lib/scipy/lib somewhere in the site.cfg so I suspect the ATLAS you built yourself is there. So I think you probably have a system ATLAS installed which is the one being used from /usr/lib - this is built with g77 and since the rest of numpy/scipy is built with gfortran that could be where the problems are coming from. Try checking your site.cfg to get it to pick up the versions you built. (you could post your site.cfg file) The only other thing is that in between builds you need to manually delete /usr/lib/python2.5/site-packages/numpy and scipy and the build directory in the numpy and scipy source directories. I just ran through the install instructions on hardy with ATLAS 3.8.1 and latest svn (numpy 5188, scipy 4376), UMFPACK 5.2.0 and it seemed to work fine. I get no errors with numpy and 4 errors and 7 failures with scipy but I am pretty sure these are unrelated to the build. Cheers Robin From xavier.gnata at gmail.com Sun May 18 09:19:26 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 18 May 2008 15:19:26 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> Message-ID: <48302CDE.9070206@gmail.com> I think the tutorial is fine :) I also think I' just going to wait for the next version of ubuntu and I will try again with a clean install of imprepid ibex. Of course, I'm stil a svn user and the parts of scipy I really use are running just fine :) Michael : gcc version 4.2.3 (Ubuntu 4.2.3-2ubuntu7) but it is still buggy if you try to use openmp. xavier > Hi, > > I wrote the Ubuntu install guide on the wiki. I'm sorry you're having > so much trouble! I know most experts recommend using the system > packages, but at the time I spent a lot of time trying that and > couldn't get it to work - in the end this was the only way I had > success which is why I put it on the wiki. > > Perhaps it is now easier to build with the distribution packages, but > I still think it's useful to have the instructions on the wiki... 
> > From looking at your build log: > > On Sun, May 18, 2008 at 12:15 PM, Xavier Gnata wrote: > >> FOUND: >> libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] >> library_dirs = ['/usr/lib'] >> language = c >> include_dirs = ['/usr/include'] >> > > It looks like you are picking up the libraries from /usr/lib - however > I notice from this: > >> blas_opt_info: >> blas_mkl_info: >> libraries mkl,vml,guide not found in /usr/lib >> libraries mkl,vml,guide not found in /usr/local/lib/scipy/lib >> NOT AVAILABLE >> > > that you must have specified /usr/local/lib/scipy/lib somewhere in the > site.cfg so I suspect the ATLAS you built yourself is there. > So I think you probably have a system ATLAS installed which is the one > being used from /usr/lib - this is built with g77 and since the rest > of numpy/scipy is built with gfortran that could be where the problems > are coming from. > > Try checking your site.cfg to get it to pick up the versions you > built. (you could post your site.cfg file) > > The only other thing is that in between builds you need to manually > delete /usr/lib/python2.5/site-packages/numpy and scipy and the build > directory in the numpy and scipy source directories. > > I just ran through the install instructions on hardy with ATLAS 3.8.1 > and latest svn (numpy 5188, scipy 4376), UMFPACK 5.2.0 and it seemed > to work fine. > > I get no errors with numpy and 4 errors and 7 failures with scipy but > I am pretty sure these are unrelated to the build. > > Cheers > > Robin > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From osman at fuse.net Sun May 18 10:44:40 2008 From: osman at fuse.net (osman) Date: Sun, 18 May 2008 10:44:40 -0400 Subject: [SciPy-user] optimizing numpy on Linux? In-Reply-To: <3d375d730805171906n248cbfefxd8a0bedf2ee627ed@mail.gmail.com> References: <3d375d730805171906n248cbfefxd8a0bedf2ee627ed@mail.gmail.com> Message-ID: <1211121880.23437.9.camel@stargate.org> On Sat, 2008-05-17 at 21:06 -0500, Robert Kern wrote: > If your application heavily uses linear algebra operations, yes, this > would help. In particular, the ATLAS in Ubuntu is fairly old and is > missing many optimizations for newer CPUs (in addition to being a > generic build that won't take advantage of your precise > configuration). However, in another thread, Xavier Gnata has been > having trouble doing exactly this. I haven't built numpy against a > recent ATLAS myself for some time, so I don't know what the problem > is. If you get it working, please let him know how you did it and > update the wiki page for installing on Linux. I was able to install atlas on 64 bit feisty/gutsy AMD64. Trick is building everything yourself. I used gfortran and followed Robin's install help page: http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 I had find and delete all of the old libs first. That turned out to be quite a task and most of troubles as I had put libs in /usr/lib /usr/local/lib /usr/home/osman/lib , 32 and 64 ... etc At the end I did have a working scipy with umfpack and sfepy. Thanks again to Robin. Regards, -osman From eads at soe.ucsc.edu Sun May 18 18:02:32 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sun, 18 May 2008 15:02:32 -0700 Subject: [SciPy-user] threading and scipy Message-ID: <4830A778.4050609@soe.ucsc.edu> Hi there, I am running some of my code through a very large data set on quad-core cluster nodes. 
A simple grep confirms that most parts of Scipy (e.g. linalg) do not use the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros (or the numpy equivalents). [eads at pumpkin scipy]$ grep ALLOW_THREADS `find ~/work/repo/scipy -name "*.[ch]*"` | grep -v "build" | sed 's/:.*//g' | sort | uniq /home/eads/work/repo/scipy/scipy/sandbox/netcdf/_netcdf.c /home/eads/work/repo/scipy/scipy/sandbox/netcdf/.svn/text-base/_netcdf.c.svn-base /home/eads/work/repo/scipy/scipy/stsci/convolve/src/_lineshapemodule.c /home/eads/work/repo/scipy/scipy/stsci/convolve/src/.svn/text-base/_lineshapemodule.c.svn-base Numpy seems to have a lot more coverage, though. [eads at pumpkin scipy]$ grep ALLOW_THREADS `find ~/work/numpy-1.0.4/numpy -name "*.c"` | sed 's/:.*//g' | sort | uniq /home/eads/work/numpy-1.0.4/numpy/core/blasdot/_dotblas.c /home/eads/work/numpy-1.0.4/numpy/core/src/arrayobject.c /home/eads/work/numpy-1.0.4/numpy/core/src/multiarraymodule.c [eads at pumpkin scipy]$ Is it true if my code is heavily dependent on Scipy (I do image processing on large images with ndimage) and I use the %bg command in IPython, most of the time there will be only one thread running with the others blocked? I anticipate others will insist that I read up on the caveats of multi-threaded programming (mutual exclusion, locking, critical regions, etc.) so I should mention that I am a pretty seasoned with it, having done quite a bit of work with pthreads. However, I am new to threading in python and I heard there are issues, specifically only one thread is allowed access to the global interpreter lock at a time. I would like to run some filters on 300 images. These filters change from one iteration to the next of the program. When all the filtering is finished, a single thread needs to see the result of all the computation (all the result images) to compute so inter-image statistics. Then, I start the process over. I'd really like to spawn 4+ threads, one each working on a different image. Being that I don't see any code in ndimage that releases the global interpreter lock, is it true that if I wrote code to spawn separate filter threads, only one would execute at a time? Please advise. Thank you! Damian From xavier.gnata at gmail.com Sun May 18 18:47:50 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Mon, 19 May 2008 00:47:50 +0200 Subject: [SciPy-user] threading and scipy In-Reply-To: <4830A778.4050609@soe.ucsc.edu> References: <4830A778.4050609@soe.ucsc.edu> Message-ID: <4830B216.2090705@gmail.com> Damian Eads wrote: > Hi there, > > I am running some of my code through a very large data set on quad-core > cluster nodes. A simple grep confirms that most parts of Scipy (e.g. > linalg) do not use the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS > macros (or the numpy equivalents). > > [eads at pumpkin scipy]$ grep ALLOW_THREADS `find ~/work/repo/scipy -name > "*.[ch]*"` | grep -v "build" | sed 's/:.*//g' | sort | uniq > /home/eads/work/repo/scipy/scipy/sandbox/netcdf/_netcdf.c > /home/eads/work/repo/scipy/scipy/sandbox/netcdf/.svn/text-base/_netcdf.c.svn-base > /home/eads/work/repo/scipy/scipy/stsci/convolve/src/_lineshapemodule.c > /home/eads/work/repo/scipy/scipy/stsci/convolve/src/.svn/text-base/_lineshapemodule.c.svn-base > > Numpy seems to have a lot more coverage, though. 
> > [eads at pumpkin scipy]$ grep ALLOW_THREADS `find ~/work/numpy-1.0.4/numpy > -name "*.c"` | sed 's/:.*//g' | sort | uniq > /home/eads/work/numpy-1.0.4/numpy/core/blasdot/_dotblas.c > /home/eads/work/numpy-1.0.4/numpy/core/src/arrayobject.c > /home/eads/work/numpy-1.0.4/numpy/core/src/multiarraymodule.c > > [eads at pumpkin scipy]$ > > Is it true if my code is heavily dependent on Scipy (I do image > processing on large images with ndimage) and I use the %bg command in > IPython, most of the time there will be only one thread running with the > others blocked? > > I anticipate others will insist that I read up on the caveats of > multi-threaded programming (mutual exclusion, locking, critical regions, > etc.) so I should mention that I am a pretty seasoned with it, having > done quite a bit of work with pthreads. However, I am new to threading > in python and I heard there are issues, specifically only one thread is > allowed access to the global interpreter lock at a time. > > I would like to run some filters on 300 images. These filters change > from one iteration to the next of the program. When all the filtering is > finished, a single thread needs to see the result of all the computation > (all the result images) to compute so inter-image statistics. Then, I > start the process over. I'd really like to spawn 4+ threads, one each > working on a different image. > > Being that I don't see any code in ndimage that releases the global > interpreter lock, is it true that if I wrote code to spawn separate > filter threads, only one would execute at a time? > > Please advise. > > Thank you! > > Damian > Well if you have 300 images you should run let say 6 times the same python *script* (not thread) on 6 sets of 50 images. It is what we call " embarrassingly parallel". http://en.wikipedia.org/wiki/Embarrassingly_parallel No threads, no locks, no mutex :) Now, if now want to perform some explicit multi-threading computations in the numpy/scipy framework, you could try to insert C code + openmp when the performances matter. One very stupid example is : import scipy from scipy.weave import converters from scipy import zeros import pylab max = 100000 code = """ #include long i; double s=1; long k=10000; #pragma omp parallel for for (i=1; i References: <4830A778.4050609@soe.ucsc.edu> Message-ID: <3d375d730805181625q2a7902d9g8d40b38cdb35b091@mail.gmail.com> On Sun, May 18, 2008 at 5:02 PM, Damian Eads wrote: > Being that I don't see any code in ndimage that releases the global > interpreter lock, is it true that if I wrote code to spawn separate > filter threads, only one would execute at a time? Yes. You may be interested in using pyprocessing which implements an API like the threading module but uses subprocesses instead. http://pyprocessing.berlios.de/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Mon May 19 07:10:09 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 19 May 2008 13:10:09 +0200 Subject: [SciPy-user] 3D bar charts Message-ID: Hi there, is there a python tool to plot 3D bar charts ? Any pointer would be appreciated. Thanks in advance Nils From david at ar.media.kyoto-u.ac.jp Mon May 19 07:52:16 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 19 May 2008 20:52:16 +0900 Subject: [SciPy-user] optimizing numpy on Linux? 
In-Reply-To: <1211121880.23437.9.camel@stargate.org> References: <3d375d730805171906n248cbfefxd8a0bedf2ee627ed@mail.gmail.com> <1211121880.23437.9.camel@stargate.org> Message-ID: <483169F0.1000307@ar.media.kyoto-u.ac.jp> osman wrote: > > I was able to install atlas on 64 bit feisty/gutsy AMD64. Trick is > building everything yourself. I used gfortran and followed Robin's > install help page: If you use Hardy, you do not need to build everything by yourself, even if you use gfortran instead of g77 (but again, unless you really need gfortran, say for OpenMP, avoid it on ubuntu, since the ABI is still g77 and not gfortran). The trick is to install libatlas* (which uses gfortran compatible ABI) instead of atlas* (g77 ABI). > http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 > > I had find and delete all of the old libs first. That turned out to be > quite a task and most of troubles as I had put libs > in /usr/lib /usr/local/lib /usr/home/osman/lib , 32 and 64 ... etc > At the end I did have a working scipy with umfpack and sfepy. One useful tool to avoid this kind of trouble is stow: http://www.gnu.org/software/stow/. Stow can be used to install many concurrent versions at the same time, by using soft links (it works on any UNIX). You cannot use several versions at the same time, though. It has the extremely important advantage of being able to uninstall anything installed under it. I use it to test many different environments to build numpy/scipy (ATLAS, MKL, NETLIB, different compilers, versions, etc...). It can also be used to track numpy and scipy svn, while having a known, working version which can be re-enabled in a second. You should never install things in /usr (except /usr/local), that's one of the best way to break your system, BTW. cheers, David From david at ar.media.kyoto-u.ac.jp Mon May 19 08:04:02 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 19 May 2008 21:04:02 +0900 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <48301AD0.80700@gmail.com> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> <48301AD0.80700@gmail.com> Message-ID: <48316CB2.7020701@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: > Indeed, scipy is very easy to compile when the lib are installed. > > well atlas using the package is ok. > umfpack is not on my ubuntu. > Yes I'm able to compile scipy but it complains that umfpack (or > libsparesuite...) is missing when I run the scipy.test(). > > Maybe the scipy.test() should clearly split the errors in two groups : > One group including "real" bugs and one reporting that libs a missing > but that it is *not* a problem if you do not plan to use this part of scipy > Yes, it is a problem, I agree. There are too many dependencies in scipy, with too much magic to make some things work. It would be much better to cut the dependencies, and support fully a restricted set instead of supporting many things at 50 %. > I do agree that the doc should tell the user to use the .deb and I think > all this stuff be just vanish when gfortran will be the default compiler. 
> I do not share your optimism :) People will still have trouble with atlas, etc... > > Cheers, > Xavier > ps : I'm not sure but I don't think you can compile the kernel using icc > ;). the kernel is written in gcc qnd not in C :) > You would be surprised :) ftp://download.intel.com/support/performancetools/c/linux/sb/linuxkernelbuildwhitepaper.pdf Looks easier than building gcc with a non glibc is more difficult, though. From jr at sun.ac.za Mon May 19 10:33:25 2008 From: jr at sun.ac.za (Johann Rohwer) Date: Mon, 19 May 2008 16:33:25 +0200 Subject: [SciPy-user] Record array help Message-ID: <48318FB5.3030800@sun.ac.za> Hi, Not sure whether to ask here or on the matplotlib list, but since it's mainly a numpy/scipy issue I thought I'd try here first. Is there any extended documentation/tutorial on record arrays? The NumPy book is pretty cryptic about this and I'm still very new to the concept. I'm using the csv2rec function from matplotlib (pylab) to generate a record array from data that's in a CSV file. The CSV file basically has column labels in the first row and numerical data in all subsequent rows. The issues I'm struggling with are: 1. Is it possible to change the dtype of a field after the record array has been created? 2. The CSV file has missing data points - how do I turn these into python 'None' elements in the record array? (If I leave that element empty in the CSV file, then csv2rec complains about not being able to handle the import; if I put 'None' in the CSV file (without quotes), then the whole field including the 'None' and all the other float data is converted into a string dtype, rendering the numerical data useless). 3. Is it possible to obtain a subset of the original data (corresponding to two or more columns of the CSV file) as a conventional 2D numpy array, or can I access the data only individually by column (i.e. field in the record array)? Any pointers would be appreciated! Johann From stefan at sun.ac.za Mon May 19 10:55:35 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 16:55:35 +0200 Subject: [SciPy-user] Record array help In-Reply-To: <48318FB5.3030800@sun.ac.za> References: <48318FB5.3030800@sun.ac.za> Message-ID: <9457e7c80805190755u2c84420fj56b17543b217ef36@mail.gmail.com> Hi Johann 2008/5/19 Johann Rohwer : > Is there any extended documentation/tutorial on record arrays? There is an introduction here: http://www.scipy.org/RecordArrays > 1. Is it possible to change the dtype of a field after the record array has > been created? It can be done, but often it is not very useful: In [3]: dt = np.dtype([('x',np.uint8),('y',np.uint8)]) In [4]: np.array([(1,2),(3,4)],dtype=dt) Out[4]: array([(1, 2), (3, 4)], dtype=[('x', '|u1'), ('y', '|u1')]) In [5]: _.view(np.uint16) Out[5]: array([ 513, 1027], dtype=uint16) I suspect what you want to do is to change one 'column' from, say, int to float, and reinterpret the data. For that, you'll need to make a copy. > 2. The CSV file has missing data points - how do I turn these into python > 'None' elements in the record array? (If I leave that element empty in the > CSV file, then csv2rec complains about not being able to handle the import; > if I put 'None' in the CSV file (without quotes), then the whole field > including the 'None' and all the other float data is converted into a string > dtype, rendering the numerical data useless). Maybe `numpy.loadtxt` could be of some use. > 3. 
Is it possible to obtain a subset of the original data (corresponding to > two or more columns of the CSV file) as a conventional 2D numpy array, or > can I access the data only individually by column (i.e. field in the record > array)? I hope someone comes up with an elegant solution, otherwise you can make a copy: numpy.array([data['field1'], data['field2']]).T Regards St?fan From pgmdevlist at gmail.com Mon May 19 11:13:57 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 19 May 2008 11:13:57 -0400 Subject: [SciPy-user] Record array help In-Reply-To: <48318FB5.3030800@sun.ac.za> References: <48318FB5.3030800@sun.ac.za> Message-ID: <200805191113.57887.pgmdevlist@gmail.com> > Not sure whether to ask here or on the matplotlib list, but since it's > mainly a numpy/scipy issue I thought I'd try here first. Johann, You'll find some basic information about record arrays on that link: http://www.scipy.org/RecordArrays > 1. Is it possible to change the dtype of a field after the record array has > been created? I'm afraid you can't. However, you can always create a new dtype afterwards, and allocate it to your record array. > 2. The CSV file has missing data points - how do I turn these into python > 'None' elements in the record array? You may want to try numpy.ma.mrecords, that gives the possibility to mask specific fields in a record array (instead of masking whole records). However, the module is still experimental, and some tweaking will be expected. > 3. Is it possible to obtain a subset of the original data (corresponding to > two or more columns of the CSV file) as a conventional 2D numpy array, or > can I access the data only individually by column (i.e. field in the record > array)? Yes, you can get a subset: >>>import numpy as np >>># Define some fields >>>a = np.arange(10,dtype=int) >>>b = np.arange(10,1,-1,dtype=int) >>>c = np.random.rand(10) >>>ndtype = [('a',int),('b',int),('c',float)] >>># Define your record array >>>mrec = np.array(zip(a,b,c), dtype=ndtype) >>># Get a subset #1: by selecting fields >>>subset_1 = np.column_stack([mrec['a'],mrec['b']]) >>># Get a subset #2: by changing the view >>>subset_2 = mrec.view((int,3))[:,2] Method #2 is quite useful if your fields have the same dtype: that way, you can switch from records/fields to lines/columns seamlessly. From bsouthey at gmail.com Mon May 19 11:20:36 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 19 May 2008 10:20:36 -0500 Subject: [SciPy-user] Record array help In-Reply-To: <9457e7c80805190755u2c84420fj56b17543b217ef36@mail.gmail.com> References: <48318FB5.3030800@sun.ac.za> <9457e7c80805190755u2c84420fj56b17543b217ef36@mail.gmail.com> Message-ID: <48319AC4.5010005@gmail.com> St?fan van der Walt wrote: > Hi Johann > > 2008/5/19 Johann Rohwer : > >> Is there any extended documentation/tutorial on record arrays? >> > > There is an introduction here: > > http://www.scipy.org/RecordArrays > > >> 1. Is it possible to change the dtype of a field after the record array has >> been created? >> > > It can be done, but often it is not very useful: > > In [3]: dt = np.dtype([('x',np.uint8),('y',np.uint8)]) > > In [4]: np.array([(1,2),(3,4)],dtype=dt) > Out[4]: > array([(1, 2), (3, 4)], > dtype=[('x', '|u1'), ('y', '|u1')]) > > In [5]: _.view(np.uint16) > Out[5]: array([ 513, 1027], dtype=uint16) > > I suspect what you want to do is to change one 'column' from, say, int > to float, and reinterpret the data. For that, you'll need to make a > copy. > > >> 2. 
The CSV file has missing data points - how do I turn these into python >> 'None' elements in the record array? (If I leave that element empty in the >> CSV file, then csv2rec complains about not being able to handle the import; >> if I put 'None' in the CSV file (without quotes), then the whole field >> including the 'None' and all the other float data is converted into a string >> dtype, rendering the numerical data useless). >> > > Maybe `numpy.loadtxt` could be of some use. > > >> 3. Is it possible to obtain a subset of the original data (corresponding to >> two or more columns of the CSV file) as a conventional 2D numpy array, or >> can I access the data only individually by column (i.e. field in the record >> array)? >> > > I hope someone comes up with an elegant solution, otherwise you can make a copy: > > numpy.array([data['field1'], data['field2']]).T > > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > Hi, You might also want to check out Andrew Straw's DataFrame class: http://www.scipy.org/Cookbook/DataFrame However, with missing values you probably should investigate using Masked Arrays. You should be able to modify the DataFrame code to handle this. Regards Bruce From strawman at astraw.com Mon May 19 11:30:39 2008 From: strawman at astraw.com (Andrew Straw) Date: Mon, 19 May 2008 08:30:39 -0700 Subject: [SciPy-user] Record array help In-Reply-To: <48319AC4.5010005@gmail.com> References: <48318FB5.3030800@sun.ac.za> <9457e7c80805190755u2c84420fj56b17543b217ef36@mail.gmail.com> <48319AC4.5010005@gmail.com> Message-ID: <48319D1F.7080500@astraw.com> Bruce Southey wrote: > Hi, > You might also want to check out Andrew Straw's DataFrame class: > http://www.scipy.org/Cookbook/DataFrame I should note that the DataFrame idea came from the time before record arrays. I now use csv2rec. Record arrays are much more flexible than the DataFrame class. -Andrew From bsouthey at gmail.com Mon May 19 11:55:45 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 19 May 2008 10:55:45 -0500 Subject: [SciPy-user] Record array help In-Reply-To: <48319D1F.7080500@astraw.com> References: <48318FB5.3030800@sun.ac.za> <9457e7c80805190755u2c84420fj56b17543b217ef36@mail.gmail.com> <48319AC4.5010005@gmail.com> <48319D1F.7080500@astraw.com> Message-ID: <4831A301.6030202@gmail.com> Andrew Straw wrote: > Bruce Southey wrote: > >> Hi, >> You might also want to check out Andrew Straw's DataFrame class: >> http://www.scipy.org/Cookbook/DataFrame >> > I should note that the DataFrame idea came from the time before record > arrays. I now use csv2rec. Record arrays are much more flexible than the > DataFrame class. > > -Andrew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > Hi, Just for reference, you can get csv2rec as part of matplotlib 0.91.2 http://matplotlib.sourceforge.net/ Bruce From Karl.Young at ucsf.edu Mon May 19 12:49:25 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Mon, 19 May 2008 09:49:25 -0700 Subject: [SciPy-user] 3D bar charts In-Reply-To: References: Message-ID: <4831AF95.6050607@ucsf.edu> It would be nice if there were something in SciPy with that capability (not complaining; I should add it if I want it that bad !) and though I haven't seen what's in the bleeding edge of the repository I am unaware of anything like that. 
When I required something like that in the past I've used dislin (http://www.mps.mpg.de/dislin/). >Hi there, > >is there a python tool to plot 3D bar charts ? > >Any pointer would be appreciated. > >Thanks in advance > > Nils >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From matteo at naufraghi.net Mon May 19 14:11:21 2008 From: matteo at naufraghi.net (Matteo Bertini) Date: Mon, 19 May 2008 20:11:21 +0200 Subject: [SciPy-user] matrix multipy ... Message-ID: Perhaps it's simple, but... Suppose I have 2 "vectors" (1xN matrix really) n [27]: a = mat("1,2,3,4,5") In [29]: b = mat("1,2,3") If I whant the product, no problem. But suppose I have a list of vectors (a proper matrix) In [37]: aa = mat("1,2,3,4,5;6,7,8,9,0") In [38]: bb = mat("1,2,3;4,5,6") Can I avoid a loop and have the resulting "cube"? In [40]: aa[0].T*bb[0] Out[40]: matrix([[ 1, 2, 3], [ 2, 4, 6], [ 3, 6, 9], [ 4, 8, 12], [ 5, 10, 15]]) I admit, I can't really understand array slicing power/limits! Thank you, Matteo Bertini From cohen at slac.stanford.edu Mon May 19 14:44:53 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 19 May 2008 20:44:53 +0200 Subject: [SciPy-user] matrix multipy ... In-Reply-To: References: Message-ID: <4831CAA5.9090200@slac.stanford.edu> hi Matteo, In [1]: import numpy as np In [2]: aa = np.mat("1,2,3,4,5;6,7,8,9,0") In [3]: bb = np.mat("1,2,3;4,5,6") In [4]: np.outer(aa.T,bb) Out[4]: array([[ 1, 2, 3, 4, 5, 6], [ 6, 12, 18, 24, 30, 36], [ 2, 4, 6, 8, 10, 12], [ 7, 14, 21, 28, 35, 42], [ 3, 6, 9, 12, 15, 18], [ 8, 16, 24, 32, 40, 48], [ 4, 8, 12, 16, 20, 24], [ 9, 18, 27, 36, 45, 54], [ 5, 10, 15, 20, 25, 30], [ 0, 0, 0, 0, 0, 0]]) In [5]: np.outer(aa,bb) Out[5]: array([[ 1, 2, 3, 4, 5, 6], [ 2, 4, 6, 8, 10, 12], [ 3, 6, 9, 12, 15, 18], [ 4, 8, 12, 16, 20, 24], [ 5, 10, 15, 20, 25, 30], [ 6, 12, 18, 24, 30, 36], [ 7, 14, 21, 28, 35, 42], [ 8, 16, 24, 32, 40, 48], [ 9, 18, 27, 36, 45, 54], [ 0, 0, 0, 0, 0, 0]]) Is that what you were looking for? cheers, Johann Matteo Bertini wrote: > Perhaps it's simple, but... > > Suppose I have 2 "vectors" (1xN matrix really) > > n [27]: a = mat("1,2,3,4,5") > In [29]: b = mat("1,2,3") > > If I whant the product, no problem. > > But suppose I have a list of vectors (a proper matrix) > > In [37]: aa = mat("1,2,3,4,5;6,7,8,9,0") > In [38]: bb = mat("1,2,3;4,5,6") > > Can I avoid a loop and have the resulting "cube"? > > In [40]: aa[0].T*bb[0] > Out[40]: > matrix([[ 1, 2, 3], > [ 2, 4, 6], > [ 3, 6, 9], > [ 4, 8, 12], > [ 5, 10, 15]]) > > I admit, I can't really understand array slicing power/limits! > > Thank you, > Matteo Bertini > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nmarais at sun.ac.za Mon May 19 15:05:50 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 19 May 2008 19:05:50 +0000 (UTC) Subject: [SciPy-user] Visibility of scikits from scipy.org website Message-ID: Hi, I've read with interest about some of the scikits development going on on the mailing lists When I wanted to find more info I first headed to www.scipy.org. 
It seems however that scikits aren't mentioned at all on www.scipy.org (zero search results for scikits). A bit of googling later I found the scikits trac website, but considering the close relationship between scipy and scikits, should it not at least get mentioned on www.scipy.org? Just thinking aloud Neilen From zunzun at zunzun.com Mon May 19 18:12:03 2008 From: zunzun at zunzun.com (James Phillips) Date: Mon, 19 May 2008 17:12:03 -0500 Subject: [SciPy-user] odrpack changing the shape of a matrix Message-ID: <268756d30805191512t6e114257kccc017f336d61fce@mail.gmail.com> The example code below demonstrates odrpack changing the shape of a matrix. I code around this behavior with a dummy 'second list' as shown below. James Phillips http://zunzun.com import numpy, scipy.odr.odrpack def f(B, x): if len(x.shape) == 2: print 'x.shape OK:', x.shape else: print 'x.shape ** changed ** --> ', x.shape return B[0] + x[0]*B[1] + x[0]*x[0]*x[0]*B[2] # constant + ax + bx^3 good_X_data = numpy.array([[1.0, 2.0, 3.0, 4.0, 5.0], [0, 0, 0, 0, 0]]) # second list is only needed for illustration bad_X_data = numpy.array([[1.0, 2.0, 3.0, 4.0, 5.0]]) # no second list, the function f() above shows shape change ydata = numpy.array([1.0, 2.0, 2.0, 2.0, 3.0]) coefficients = numpy.array([0.89655172413793, 0.33646812957158, 0.00208986415883]) model = scipy.odr.odrpack.Model(f) goodRealData = scipy.odr.odrpack.RealData(good_X_data, ydata) badRealData = scipy.odr.odrpack.RealData(bad_X_data, ydata) # independant data (X) unchanged myodr = scipy.odr.odrpack.ODR(goodRealData, model, beta0=coefficients, maxit=0) myodr.set_job(var_calc=0) myoutput = myodr.run() print 'Good Std Error:', myoutput.sd_beta, ' (should be [ 0.81412536 0.45377835 0.0142225 ])' # independant data (X) changes shape myodr = scipy.odr.odrpack.ODR(badRealData, model, beta0=coefficients, maxit=0) myodr.set_job(var_calc=0) myoutput = myodr.run() print 'Bad Std Error:', myoutput.sd_beta, ' (should be [ 0.81412536 0.45377835 0.0142225 ])' -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo at naufraghi.net Mon May 19 19:15:48 2008 From: matteo at naufraghi.net (Matteo Bertini) Date: Tue, 20 May 2008 01:15:48 +0200 Subject: [SciPy-user] matrix multipy ... In-Reply-To: <4831CAA5.9090200@slac.stanford.edu> References: <4831CAA5.9090200@slac.stanford.edu> Message-ID: Johann Cohen-Tanugi ha scritto: > hi Matteo, > > In [1]: import numpy as np > > In [2]: aa = np.mat("1,2,3,4,5;6,7,8,9,0") > > In [3]: bb = np.mat("1,2,3;4,5,6") > > In [5]: np.outer(aa,bb) > Out[5]: > array([[ 1, 2, 3, 4, 5, 6], > [ 2, 4, 6, 8, 10, 12], > [ 3, 6, 9, 12, 15, 18], > [ 4, 8, 12, 16, 20, 24], > [ 5, 10, 15, 20, 25, 30], > [ 6, 12, 18, 24, 30, 36], > [ 7, 14, 21, 28, 35, 42], > [ 8, 16, 24, 32, 40, 48], > [ 9, 18, 27, 36, 45, 54], > [ 0, 0, 0, 0, 0, 0]]) > > Is that what you were looking for? > cheers, > Johann Not really, the loop I'd like to avoid is this: In [63]: cc = N.zeros((bb.shape[0], aa.shape[1], bb.shape[1])) In [64]: for i in range(cc.shape[0]): ....: cc[i] = aa[i].T*bb[i] In [66]: cc Out[66]: array([[[ 1., 2., 3.], [ 2., 4., 6.], [ 3., 6., 9.], [ 4., 8., 12.], [ 5., 10., 15.]], [[ 24., 30., 36.], [ 28., 35., 42.], [ 32., 40., 48.], [ 36., 45., 54.], [ 0., 0., 0.]]]) The outer product produces the same results in Q1 and Q3, but does useles (in this case :P) computation in Q2 and Q4. Dunno if the computation can be faster, but I'd like to find out if there is a slice trick to do this kind of things. 
Thank you, Matteo From david at ar.media.kyoto-u.ac.jp Mon May 19 21:23:41 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 20 May 2008 10:23:41 +0900 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <483013AB.3@slac.stanford.edu> References: <482DFDB3.9040909@gmail.com> <3d375d730805161437h28f1ba99tf66221a2d96362b3@mail.gmail.com> <482E0373.4060809@gmail.com> <3d375d730805161515i7c517bb2nf32fa0a2890197b3@mail.gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> <483013AB.3@slac.stanford.edu> Message-ID: <4832281D.40304@ar.media.kyoto-u.ac.jp> Johann Cohen-Tanugi wrote: > last time I checked the Fedora blas/lapack distributions were still > flawed.... Did that change for Fedora 9? > At least for FC 6, 7 and 8, I build blas/lapack which do work for numpy and scipy. As I mentioned before, I do not use rpm-based distributions, and it would be great to have someone who uses them and care about numpy/scipy to take over. I can't remember when/where we discuss it, but it would be good to have someone from fedora to take over this. cheers, David From peridot.faceted at gmail.com Tue May 20 00:59:47 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 20 May 2008 00:59:47 -0400 Subject: [SciPy-user] matrix multipy ... In-Reply-To: References: <4831CAA5.9090200@slac.stanford.edu> Message-ID: 2008/5/19 Matteo Bertini : > Not really, the loop I'd like to avoid is this: > > In [63]: cc = N.zeros((bb.shape[0], aa.shape[1], bb.shape[1])) > > In [64]: for i in range(cc.shape[0]): > ....: cc[i] = aa[i].T*bb[i] > The outer product produces the same results in Q1 and Q3, but does > useles (in this case :P) computation in Q2 and Q4. > > Dunno if the computation can be faster, but I'd like to find out if > there is a slice trick to do this kind of things. In general, I don't think there is an efficient way to do "element-wise matrix multiplication". It's frustrating, because that's something one often wants to do. The inefficient way to do it looks like: In [2]: A = np.random.randn(3,4) In [3]: B = np.random.randn(4,5) In [4]: np.dot(A,B) Out[4]: array([[-2.16160901, 1.37419626, 1.69220464, -1.92437171, -2.81084305], [-1.11186059, -3.22519623, -0.13694365, 0.58809612, 1.14769438], [ 3.50699881, 3.14051249, -1.36108516, 0.41971033, -1.38753093]]) In [5]: np.sum(A[:,:,np.newaxis]*B[np.newaxis,:,:],axis=1) Out[5]: array([[-2.16160901, 1.37419626, 1.69220464, -1.92437171, -2.81084305], [-1.11186059, -3.22519623, -0.13694365, 0.58809612, 1.14769438], [ 3.50699881, 3.14051249, -1.36108516, 0.41971033, -1.38753093]]) This is only one matrix, but you can add in any number of additional axes at the beginning. The big problem with this is that because the sum is done separately, there is a temporary of size N*M*R, which is very much larger than either of the original two matrices. In your case, things are a bit simpler because you're doing vector products: In [7]: A = np.random.randn(3,4,5) In [8]: B = np.random.randn(3,4,5) In [12]: np.sum(A*B,axis=-1).shape Out[12]: (3, 4) Here the temporary is no larger than the two input arrays, so it's not as big a problem. 
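For the batched outer product in Matteo's example, the sum can be skipped
entirely: inserting length-1 axes lets broadcasting form one outer product
per row in a single step. A minimal sketch, assuming the np.mat inputs are
first converted to plain arrays:

import numpy as np

aa = np.asarray(np.mat("1,2,3,4,5;6,7,8,9,0"))   # shape (2, 5)
bb = np.asarray(np.mat("1,2,3;4,5,6"))           # shape (2, 3)

# (2, 5, 1) * (2, 1, 3) broadcasts to (2, 5, 3),
# so cc[i] == outer(aa[i], bb[i]) with no temporary larger than the result.
cc = aa[:, :, np.newaxis] * bb[:, np.newaxis, :]

The same trick generalizes to any number of leading axes, as long as the two
arrays agree on them.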
Anne From millman at berkeley.edu Tue May 20 06:55:17 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 20 May 2008 03:55:17 -0700 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: <4832281D.40304@ar.media.kyoto-u.ac.jp> References: <482DFDB3.9040909@gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> <483013AB.3@slac.stanford.edu> <4832281D.40304@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, May 19, 2008 at 6:23 PM, David Cournapeau wrote: > I can't remember when/where we discuss it, but it would be good to have > someone from fedora to take over this. I going to be the official fedora maintainer of numpy/scipy starting fairly soon. I have the process temporarily on hold, while I try to get NumPy 1.1.0 released, open abstract submissions for the SciPy conference, etc.... Once I take finally finish taking this over I will keep the list posted. You (the numpy/scipy community) will also have someone to complain to when the official fedora rpms aren't working correctly. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cohen at slac.stanford.edu Tue May 20 07:32:19 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 20 May 2008 13:32:19 +0200 Subject: [SciPy-user] scipy 0.7.0.dev4373 + atlas FAILED (failures=2, errors=12) In-Reply-To: References: <482DFDB3.9040909@gmail.com> <482E11F6.9060401@gmail.com> <482E9612.3040009@ar.media.kyoto-u.ac.jp> <482F2039.7020403@gmail.com> <482FEFB8.2090206@slac.stanford.edu> <482FF0A6.2040201@ar.media.kyoto-u.ac.jp> <48300FD2.9090308@gmail.com> <48301009.6080204@ar.media.kyoto-u.ac.jp> <483013AB.3@slac.stanford.edu> <4832281D.40304@ar.media.kyoto-u.ac.jp> Message-ID: <4832B6C3.6070404@slac.stanford.edu> hi Jarrod, great news! Thanks. Let me know if you need guinea pigs, though I am on F8 and do not plan to get F9 in a foreseeable future, unless I dare try the new anaconda upgrade wrapper. best, Johann Jarrod Millman wrote: > On Mon, May 19, 2008 at 6:23 PM, David Cournapeau > wrote: > >> I can't remember when/where we discuss it, but it would be good to have >> someone from fedora to take over this. >> > > I going to be the official fedora maintainer of numpy/scipy starting > fairly soon. I have the process temporarily on hold, while I try to > get NumPy 1.1.0 released, open abstract submissions for the SciPy > conference, etc.... > > Once I take finally finish taking this over I will keep the list > posted. You (the numpy/scipy community) will also have someone to > complain to when the official fedora rpms aren't working correctly. > > From david.huard at gmail.com Tue May 20 09:06:51 2008 From: david.huard at gmail.com (David Huard) Date: Tue, 20 May 2008 09:06:51 -0400 Subject: [SciPy-user] Visibility of scikits from scipy.org website In-Reply-To: References: Message-ID: <91cf711d0805200606o649f57c5mf378c73c8a89e5a7@mail.gmail.com> Hi Neilen, Please go ahead and write something up. Thanks, David 2008/5/19 Neilen Marais : > Hi, > > I've read with interest about some of the scikits development going on on > the mailing lists When I wanted to find more info I first headed to > www.scipy.org. 
It seems however that scikits aren't mentioned at all on > www.scipy.org (zero search results for scikits). A bit of googling later > I found the scikits trac website, but considering the close relationship > between scipy and scikits, should it not at least get mentioned on > www.scipy.org? > > Just thinking aloud > Neilen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Tue May 20 10:03:36 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 20 May 2008 08:03:36 -0600 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices Message-ID: All: I'm porting some code from Matlab and I ran across a subtle difference between it and Numpy. In Matlab: a = [ 11 12 13 14 ; 21 22 23 24 ; 31 32 33 34 ]; a([1 3],[1 4]) gives you: 11 14 31 34 In Python: a = array([[ 11, 12, 13, 14 ], [ 21, 22, 23, 24 ], [ 31, 32, 33, 34 ]]) a[[0,2],[0,3]] gives you: array([11, 34]) I tried to find an example of how to do this, and the normally very helpful page at http://mathesaurus.sourceforge.net/matlab-numpy.html has this: a.take([0,2]).take([0,3], axis=1) which gives me: ValueError: axis(=1) out of bounds So, I have two questions: 1) How do I actually get the equivalent behavior in Python? 2) Does anyone know the best way to contact Vidar Gundersen, the author of the above page to let him know about these errors? It really is a useful resource, and I'd hate to have it continue to live with errors in it... --Mike ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ From nmarais at sun.ac.za Tue May 20 11:06:09 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 20 May 2008 15:06:09 +0000 (UTC) Subject: [SciPy-user] extracting elements of a matrix using arrays as indices References: Message-ID: Hi Mike On Tue, 20 May 2008 08:03:36 -0600, Michael Hearne wrote: > In Python: > a = array([[ 11, 12, 13, 14 ], > [ 21, 22, 23, 24 ], > [ 31, 32, 33, 34 ]]) > > a[[0,2],[0,3]] > > gives you: > > array([11, 34]) I think you're looking for a[ix_([0,2], [0,3])] ix_ == numpy.ix_ Regards Neilen From mhearne at usgs.gov Tue May 20 16:43:28 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 20 May 2008 14:43:28 -0600 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: References: Message-ID: <1E04AF86-C790-405C-81E5-7F5CCFD0C2A8@usgs.gov> Neilen - Thanks. Unfortunately, I've either discovered a bug or have failed to understand how to use this function. The below code snippet: from pylab import * import numpy print numpy.__version__, numpy.__file__ data = rand(648,690) i,j = (data < 0.14).nonzero() data[ix_(i,j)] = data[ix_(i,j)]*0 print 'No crash.' returns: 1.1.0.dev5077 /Library/Python/2.5/site-packages/numpy-1.1.0.dev5077- py2.5-macosx-10.3-i386.egg/numpy/__init__.pyc Segmentation fault I'm using the numpy that came with the SciPy SuperPack, created on or before April 30, on Mac OS X 10.5.2. If this is my fault, can someone point out the flaw in my code? If this is a bug, I'll be happy to submit a bug in some sort of tracking system, and provide whatever information is desired by a developer. 
Thanks, Mike On May 20, 2008, at 9:06 AM, Neilen Marais wrote: > Hi Mike > > On Tue, 20 May 2008 08:03:36 -0600, Michael Hearne wrote: > >> In Python: >> a = array([[ 11, 12, 13, 14 ], >> [ 21, 22, 23, 24 ], >> [ 31, 32, 33, 34 ]]) >> >> a[[0,2],[0,3]] >> >> gives you: >> >> array([11, 34]) > > I think you're looking for > > a[ix_([0,2], [0,3])] > > ix_ == numpy.ix_ > > Regards > Neilen > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Tue May 20 17:51:00 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Tue, 20 May 2008 23:51:00 +0200 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: References: Message-ID: <1211320260.2760.2.camel@localhost.localdomain> Le mardi 20 mai 2008 ? 08:03 -0600, Michael Hearne a ?crit : > In Python: > a = array([[ 11, 12, 13, 14 ], > [ 21, 22, 23, 24 ], > [ 31, 32, 33, 34 ]]) > a[[0,2],[0,3]] > gives you: > array([11, 34]) > So, I have two questions: > 1) How do I actually get the equivalent behavior in Python? >>> from numpy import array >>> a = array([[ 11, 12, 13, 14 ], ... [ 21, 22, 23, 24 ], ... [ 31, 32, 33, 34 ]]) >>> a[:,[0,3]][[0,2],:] array([[11, 14], [31, 34]]) >>> a[[0,2],[0,3]] array([11, 34]) -- Fabrice Silva LMA UPR CNRS 7051 - ?quipe S2M From alan.mcintyre at gmail.com Tue May 20 19:41:42 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Tue, 20 May 2008 19:41:42 -0400 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: References: Message-ID: <1d36917a0805201641w1a67167bj5c3a9428ec4942ba@mail.gmail.com> On Tue, May 20, 2008 at 10:03 AM, Michael Hearne wrote: > I tried to find an example of how to do this, and the normally very > helpful page at http://mathesaurus.sourceforge.net/matlab-numpy.html > has this: > > a.take([0,2]).take([0,3], axis=1) Does anybody know if this is something that would have worked properly in an older version of Numeric/NumPy? From zhangchipr at gmail.com Tue May 20 22:53:26 2008 From: zhangchipr at gmail.com (zhang chi) Date: Wed, 21 May 2008 10:53:26 +0800 Subject: [SciPy-user] Is there a bicubic interpolation function in scipy? Message-ID: <90c482ab0805201953o3a17fdb1n3ae08c3cb92e1bef@mail.gmail.com> hi I want to use a bicubic interpolation function to process a image, is there a a bicubic interpolation function in scipy? thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed May 21 03:54:25 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 09:54:25 +0200 Subject: [SciPy-user] Is there a bicubic interpolation function in scipy? In-Reply-To: <90c482ab0805201953o3a17fdb1n3ae08c3cb92e1bef@mail.gmail.com> References: <90c482ab0805201953o3a17fdb1n3ae08c3cb92e1bef@mail.gmail.com> Message-ID: <9457e7c80805210054u772281d6k978d54b9e514a6ea@mail.gmail.com> 2008/5/21 zhang chi : > I want to use a bicubic interpolation function to process a image, is > there a a bicubic interpolation function in scipy? 
There are a couple of options available. In ndimage, you can specify an `order` parameter to most functions, which determines the order of the splines used for interpolation, e.g. x = np.array([1,2,3.]) ndimage.zoom(x,5/3.,order=1) In `scipy.interpolate`, you also have `interp2d` and SmoothBivariateSpline (with kx=1,ky=1). Furthermore, if you need C code I can provide you with either ctypes or cython examples. Regards St?fan From jr at sun.ac.za Wed May 21 04:24:42 2008 From: jr at sun.ac.za (Johann Rohwer) Date: Wed, 21 May 2008 10:24:42 +0200 Subject: [SciPy-user] splrep/splev versus interp1d Message-ID: <4833DC4A.6000708@sun.ac.za> Hi, What is the difference (in terms of the underlying algorithms) between splines generated by sp.interpolate.interp1d and sp.interpolate.splrep/splev? As an example, when I do a simple spline interpolation of a sparsely sampled sin function, the two methods give me close but not identical results (especially close to the bounds of the x data the interpolated values differ). Code: import scipy as sp sp.pkgload('interpolate') x=sp.linspace(0,10,11) y=sp.sin(x) x2=sp.linspace(0,10,201) tck=sp.interpolate.splrep(x,y,k=3) y2=sp.interpolate.splev(x2,tck) f=sp.interpolate.interp1d(x,y,kind=3) y3=f(x2) y2 and y3 differ! Regards Johann From stefan at sun.ac.za Wed May 21 07:53:29 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 13:53:29 +0200 Subject: [SciPy-user] splrep/splev versus interp1d In-Reply-To: <4833DC4A.6000708@sun.ac.za> References: <4833DC4A.6000708@sun.ac.za> Message-ID: <9457e7c80805210453s2f86ad6bw54d61b28e090a62d@mail.gmail.com> Hi Johann If I interpolate on points including the originals, i.e. x2 = sp.linspace(0,10,200,endpoint=False) then I see that, for both methods, they at least go through those points: n [147]: for nr,i in enumerate(x): print y[nr] - y2[x2 == i] .....: .....: [ 3.01232352e-18] [ -1.11022302e-16] [ -1.11022302e-16] [ 2.77555756e-17] [ 0.] [ -3.33066907e-16] [ 0.] [ 0.] [ 1.11022302e-16] [ 2.22044605e-16] [] In [149]: for nr,i in enumerate(x): print y[nr] - y3[x2 == i] .....: .....: [ -7.49400542e-16] [ 6.66133815e-16] [ 6.66133815e-16] [ -1.38777878e-16] [ 3.33066907e-16] [ 8.88178420e-16] [ -1.11022302e-16] [ 4.44089210e-16] [ 2.22044605e-16] [ -3.88578059e-16] [] Maybe the two algorithms use different smoothness measures? I'm interested to know why this happens, but I don't have more time to look at it right now, unfortunately. Thanks St?fan 2008/5/21 Johann Rohwer : > Hi, > > What is the difference (in terms of the underlying algorithms) between > splines generated by sp.interpolate.interp1d and sp.interpolate.splrep/splev? > > As an example, when I do a simple spline interpolation of a sparsely sampled > sin function, the two methods give me close but not identical results > (especially close to the bounds of the x data the interpolated values differ). > > Code: > import scipy as sp > sp.pkgload('interpolate') > x=sp.linspace(0,10,11) > y=sp.sin(x) > x2=sp.linspace(0,10,201) > tck=sp.interpolate.splrep(x,y,k=3) > y2=sp.interpolate.splev(x2,tck) > f=sp.interpolate.interp1d(x,y,kind=3) > y3=f(x2) > > y2 and y3 differ! 
> > Regards > Johann > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From johannes.stromberg at gmail.com Wed May 21 09:49:24 2008 From: johannes.stromberg at gmail.com (=?ISO-8859-1?Q?Johannes_Str=F6mberg?=) Date: Wed, 21 May 2008 15:49:24 +0200 Subject: [SciPy-user] PIL and gaussian_filter? Message-ID: Hi, I am new to using SciPy and I want to use it to apply gaussian smoothing/blur to images I get from PIL (Python Imaging Library). When I use the asarray() method on my PIL image I get a 3-dimensional array, shape is (w, h, 3 [rgb-values]). I am wondering how I can transform this to something that is compatible with f.e. ndimage.gaussian_filter? (When I run it directly it simply removes color from the image). Regards, Johannes From mhearne at usgs.gov Wed May 21 09:51:25 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 21 May 2008 07:51:25 -0600 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: <1211320260.2760.2.camel@localhost.localdomain> References: <1211320260.2760.2.camel@localhost.localdomain> Message-ID: If I try that on a larger example, similar to the second one I made yesterday: nrows = 648 ncols = 690 data = rand(nrows,ncols) i,j = (data < 0.14).nonzero() data[i,:][:,j] = data[i,:][:,j]*0 I get another segmentation fault. I realize that the sense of the arrays is backwards from Fabrice's example, but doing it the other way: data[:,i][j,:] gives me an index out of range error. --Mike On May 20, 2008, at 3:51 PM, Fabrice Silva wrote: > Le mardi 20 mai 2008 ? 08:03 -0600, Michael Hearne a ?crit : >> In Python: >> a = array([[ 11, 12, 13, 14 ], >> [ 21, 22, 23, 24 ], >> [ 31, 32, 33, 34 ]]) >> a[[0,2],[0,3]] >> gives you: >> array([11, 34]) > >> So, I have two questions: >> 1) How do I actually get the equivalent behavior in Python? > >>>> from numpy import array >>>> a = array([[ 11, 12, 13, 14 ], > ... [ 21, 22, 23, 24 ], > ... [ 31, 32, 33, 34 ]]) > >>>> a[:,[0,3]][[0,2],:] > array([[11, 14], > [31, 34]]) > >>>> a[[0,2],[0,3]] > array([11, 34]) > > -- > Fabrice Silva > LMA UPR CNRS 7051 - ?quipe S2M > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Wed May 21 10:06:52 2008 From: robince at gmail.com (Robin) Date: Wed, 21 May 2008 15:06:52 +0100 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: <1E04AF86-C790-405C-81E5-7F5CCFD0C2A8@usgs.gov> References: <1E04AF86-C790-405C-81E5-7F5CCFD0C2A8@usgs.gov> Message-ID: On Tue, May 20, 2008 at 9:43 PM, Michael Hearne wrote: > Neilen - Thanks. Unfortunately, I've either discovered a bug or have failed > to understand how to use this function. The below code snippet: > from pylab import * > import numpy > print numpy.__version__, numpy.__file__ > data = rand(648,690) > i,j = (data < 0.14).nonzero() > data[ix_(i,j)] = data[ix_(i,j)]*0 > print 'No crash.' 
If you're just trying to zero out values less than 0.14 (that's what it looks like to me) you could try this: data = rand(648,690) data[data<0.14] = 0 which should be quicker and not crash! I tried data[ix_(i,j)] = 0 (not sure why you need to multiply itself by zero - you can just assign the value 0 directly) but it appears to be very slow so if you can use the boolean indexing it will probably be better. Cheers Robin From alan.mcintyre at gmail.com Wed May 21 10:10:19 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Wed, 21 May 2008 10:10:19 -0400 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: References: <1211320260.2760.2.camel@localhost.localdomain> Message-ID: <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> On Wed, May 21, 2008 at 9:51 AM, Michael Hearne wrote: > If I try that on a larger example, similar to the second one I made > yesterday: > nrows = 648 > ncols = 690 > data = rand(nrows,ncols) > i,j = (data < 0.14).nonzero() > data[i,:][:,j] = data[i,:][:,j]*0 > I get another segmentation fault. This works for me (assuming you're trying to set elements in data that are less than 0.14 to zero): nrows = 648 ncols = 690 data = rand(nrows,ncols) z = (data < 0.14).nonzero() data[z] = 0 From amcmorl at gmail.com Wed May 21 10:12:52 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Wed, 21 May 2008 10:12:52 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: Message-ID: 2008/5/21 Johannes Str?mberg : > Hi, > > I am new to using SciPy and I want to use it to apply gaussian > smoothing/blur to images I get from PIL (Python Imaging Library). > > When I use the asarray() method on my PIL image I get a 3-dimensional > array, shape is (w, h, 3 [rgb-values]). > > I am wondering how I can transform this to something that is > compatible with f.e. ndimage.gaussian_filter? (When I run it directly > it simply removes color from the image). I believe this is because you are also blurring across the last dimension (i.e. smoothing the colour information for each pixel), resulting in something much closer to grey in each case. To prevent this you need to blur only in the first two (spatial) dimensions of the image: Here's a one liner that should do what you want: ar2 = np.concatenate([scipy.ndimage.gaussian_filter(ar[...,x,np.newaxis], np.std(ar)/k) for x in xrange(ar.shape[2])], axis=2) where k determines the level of filtering. It uses a list comprehension over the colour dimension, and there's probably a better way to do this using np.apply_along_axis or apply_over_axes, but I can't immediately work it out. Anyone more familiar with these want to chip that in? HTH. Regards, Angus. -- AJC McMorland, PhD candidate Physiology, University of Auckland (Nearly) post-doctoral research fellow Neurobiology, University of Pittsburgh From alan.mcintyre at gmail.com Wed May 21 10:17:04 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Wed, 21 May 2008 10:17:04 -0400 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: References: <1E04AF86-C790-405C-81E5-7F5CCFD0C2A8@usgs.gov> Message-ID: <1d36917a0805210717r3c15649fy66b47a404eb69435@mail.gmail.com> On Wed, May 21, 2008 at 10:06 AM, Robin wrote: > data = rand(648,690) > data[data<0.14] = 0 Ah, that's clearer than my example. 
:) > I tried data[ix_(i,j)] = 0 (not sure why you need to multiply itself > by zero - you can just assign the value 0 directly) but it appears to > be very slow so if you can use the boolean indexing it will probably > be better. I didn't have time to figure out why, but using ix_ like that was enormously slow and used a *lot* of memory. It brought my system to a crawl (until I killed it) with a 300x300 matrix, while the boolean indexing method is very fast with 3000x3000. From zachary.pincus at yale.edu Wed May 21 10:26:58 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 21 May 2008 16:26:58 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: Message-ID: Hello, > When I use the asarray() method on my PIL image I get a 3-dimensional > array, shape is (w, h, 3 [rgb-values]). > > I am wondering how I can transform this to something that is > compatible with f.e. ndimage.gaussian_filter? (When I run it directly > it simply removes color from the image). ndimage works as the name implies: it is a library for dealing with n- dimensional images. So in this case, it treated your input as a [w,h, 3] array of voxels (a 3D image), and then applied the gaussian smoothing of the requested variance across all three dimensions. The result of which was probably (a) some degree of smoothing in the x and y dimensions (as requested), and (b) the same degree of smoothing across the "z" (color channel) dimension, which (as it is only "3 voxels high" translates into quite a bit of mixing of the pixels). This "z smoothing" of course translates into mixing the color channels, probably by quite a large degree if your gaussian variance was anything above one pixel. Which would give the effect of "removing the color" from the image. So, you need to apply the gaussian filter to each channel independently. You could either do this with a for loop, or even easier, pass a tuple to the sigma parameter to describe the requested variance in each dimension. To smooth by a 2-pixel stdev gaussian in w, a 4-pixel gaussian in h, and to do no smoothing across color channels, just pass 'sigma=[2,4,0]' to gaussian_filter. Zach From mhearne at usgs.gov Wed May 21 10:35:47 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Wed, 21 May 2008 08:35:47 -0600 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> Message-ID: <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> Thanks for the help - the method below works just fine in my real application (which has nothing to do with setting elements of an array to zero - that was just a simple example of something to _do_ with the data being indexed). However, I am still concerned about the larger problem of getting a segfault using ANY method of indexing an array. If it is user error that is causing the problem, then shouldn't I get an exception that tells me my syntax is somehow incorrect? I've added SEGFAULT to the subject line in hopes that someone responsible for the core NumPy code (Travis O., perhaps?) will take notice and address the issue. If that happens, once again, I'm happy to help test on my version of NumPy wherever needed. 
Thanks for all the suggestions, Mike On May 21, 2008, at 8:10 AM, Alan McIntyre wrote: > On Wed, May 21, 2008 at 9:51 AM, Michael Hearne > wrote: >> If I try that on a larger example, similar to the second one I made >> yesterday: >> nrows = 648 >> ncols = 690 >> data = rand(nrows,ncols) >> i,j = (data < 0.14).nonzero() >> data[i,:][:,j] = data[i,:][:,j]*0 >> I get another segmentation fault. > > This works for me (assuming you're trying to set elements in data that > are less than 0.14 to zero): > > nrows = 648 > ncols = 690 > data = rand(nrows,ncols) > z = (data < 0.14).nonzero() > data[z] = 0 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From elr1979 at gmail.com Wed May 21 11:21:00 2008 From: elr1979 at gmail.com (Eduardo Rodrigues) Date: Wed, 21 May 2008 12:21:00 -0300 Subject: [SciPy-user] optimize.fsolve References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: <005d01c8bb56$48715e70$a300a8c0@rodrigues> When I use optimize.fsolve(...) on windows, python interpreter crashes without any error msg. I am using pywin32 build 210, scipy version 0.6.0, windows xp. Regards, Eduardo. From stefan at sun.ac.za Wed May 21 11:29:23 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 17:29:23 +0200 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> Message-ID: <9457e7c80805210829l29dbb9a4gc4afa70e45a7d129@mail.gmail.com> Hi Mike 2008/5/21 Michael Hearne : > If I try that on a larger example, similar to the second one I made > > yesterday: > > nrows = 648 > > ncols = 690 > > data = rand(nrows,ncols) > > i,j = (data < 0.14).nonzero() > > data[i,:][:,j] = data[i,:][:,j]*0 > > I get another segmentation fault. I can confirm that this bug exists in r5120. Regards St?fan From alan.mcintyre at gmail.com Wed May 21 11:31:34 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Wed, 21 May 2008 11:31:34 -0400 Subject: [SciPy-user] optimize.fsolve In-Reply-To: <005d01c8bb56$48715e70$a300a8c0@rodrigues> References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> <005d01c8bb56$48715e70$a300a8c0@rodrigues> Message-ID: <1d36917a0805210831g35f44ffbha33481efc82a4dc6@mail.gmail.com> On Wed, May 21, 2008 at 11:21 AM, Eduardo Rodrigues wrote: > When I use optimize.fsolve(...) on windows, python interpreter crashes > without any error msg. > I am using pywin32 build 210, scipy version 0.6.0, windows xp. Eduardo, Can you provide some sample code that produces the crash? Thanks, Alan From oliphant at enthought.com Wed May 21 12:23:21 2008 From: oliphant at enthought.com (Travis E. 
Oliphant) Date: Wed, 21 May 2008 11:23:21 -0500 Subject: [SciPy-user] splrep/splev versus interp1d In-Reply-To: <4833DC4A.6000708@sun.ac.za> References: <4833DC4A.6000708@sun.ac.za> Message-ID: <48344C79.7020809@enthought.com> Johann Rohwer wrote: > Hi, > > What is the difference (in terms of the underlying algorithms) between > splines generated by sp.interpolate.interp1d and sp.interpolate.splrep/splev? > The former does not use any "smoothing" while the latter does use "smoothing". Interpolation using splines for order greater than 1 actually requires additional constraints to be made as there are more degrees of freedom left after specifying continuity up to the (k-1)st derivative (for order k). The splrep functions use a smoothing constraint which I find useless unless you are interpolating noisy "data" (you can set the smoothing constraint to 0 and then I'm not sure what additional constraint it is using to define a unique spline). The interp1d function which uses low-level spline interpolation functions by default uses a constraint that enforces "minimial" discontinuity in the kth-derivative. So, yes, they will return non-identical results and this is to be expected. Interpolations for k>1 depends on the additional assumptions you add and there is a large number of possibilities. I'd like to flesh this out a bit better so it is clear what additional constraints are available. -Travis From elr1979 at gmail.com Wed May 21 12:26:56 2008 From: elr1979 at gmail.com (Eduardo Rodrigues) Date: Wed, 21 May 2008 13:26:56 -0300 Subject: [SciPy-user] optimize.fsolve References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com><005d01c8bb56$48715e70$a300a8c0@rodrigues> <1d36917a0805210831g35f44ffbha33481efc82a4dc6@mail.gmail.com> Message-ID: <006801c8bb5f$7ec18c80$a300a8c0@rodrigues> For example: >>> from scipy import * >>> def Eq1(x): ... x**2+2.0*x-3 ... >>> optimize.fsolve(Eq1,0) Regards. ----- Original Message ----- From: "Alan McIntyre" To: "SciPy Users List" Sent: Wednesday, May 21, 2008 12:31 PM Subject: Re: [SciPy-user] optimize.fsolve > On Wed, May 21, 2008 at 11:21 AM, Eduardo Rodrigues > wrote: >> When I use optimize.fsolve(...) on windows, python interpreter crashes >> without any error msg. >> I am using pywin32 build 210, scipy version 0.6.0, windows xp. > > Eduardo, > > Can you provide some sample code that produces the crash? > > Thanks, > Alan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jeevan.baretto at gmail.com Wed May 21 12:34:26 2008 From: jeevan.baretto at gmail.com (Jeevan Baretto) Date: Wed, 21 May 2008 22:04:26 +0530 Subject: [SciPy-user] optimize.fsolve In-Reply-To: <006801c8bb5f$7ec18c80$a300a8c0@rodrigues> References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> <005d01c8bb56$48715e70$a300a8c0@rodrigues> <1d36917a0805210831g35f44ffbha33481efc82a4dc6@mail.gmail.com> <006801c8bb5f$7ec18c80$a300a8c0@rodrigues> Message-ID: <46f941590805210934i3bf085dy47ab557d8af547d8@mail.gmail.com> Yes it happened with me too. I use Windows at home and linux at office. Not only fsolve, other scipy.optimize modules too crash while I work Windows. I am forced to work on linux for that reason. Some one please look into it. Thanks, Jeevan On Wed, May 21, 2008 at 9:56 PM, Eduardo Rodrigues wrote: > For example: > > >>> from scipy import * > >>> def Eq1(x): > ... x**2+2.0*x-3 > ... > >>> optimize.fsolve(Eq1,0) > > Regards. 
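For reference, a minimal fsolve call in which the objective actually returns its residual (fsolve searches for a zero of the returned value); the function name and the starting point below are only illustrative:

from scipy import optimize

def eq1(x):
    # fsolve looks for eq1(x) == 0, so the residual has to be returned
    return x**2 + 2.0*x - 3

root = optimize.fsolve(eq1, 0)
print(root)   # converges to the root near 1.0; the other root, -3, needs a different start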
> > ----- Original Message ----- > From: "Alan McIntyre" > To: "SciPy Users List" > Sent: Wednesday, May 21, 2008 12:31 PM > Subject: Re: [SciPy-user] optimize.fsolve > > > > On Wed, May 21, 2008 at 11:21 AM, Eduardo Rodrigues > > wrote: > >> When I use optimize.fsolve(...) on windows, python interpreter crashes > >> without any error msg. > >> I am using pywin32 build 210, scipy version 0.6.0, windows xp. > > > > Eduardo, > > > > Can you provide some sample code that produces the crash? > > > > Thanks, > > Alan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Wed May 21 12:59:33 2008 From: david.huard at gmail.com (David Huard) Date: Wed, 21 May 2008 12:59:33 -0400 Subject: [SciPy-user] Is there a bicubic interpolation function in scipy? In-Reply-To: <9457e7c80805210054u772281d6k978d54b9e514a6ea@mail.gmail.com> References: <90c482ab0805201953o3a17fdb1n3ae08c3cb92e1bef@mail.gmail.com> <9457e7c80805210054u772281d6k978d54b9e514a6ea@mail.gmail.com> Message-ID: <91cf711d0805210959j5c9352ebx43310e21bced8262@mail.gmail.com> And if you like fortran, I wrote a python wrapper for the fortran implementation of the Akima bicubic interpolator (ACM 760). David 2008/5/21 St?fan van der Walt : > 2008/5/21 zhang chi : > > I want to use a bicubic interpolation function to process a image, is > > there a a bicubic interpolation function in scipy? > > There are a couple of options available. In ndimage, you can specify > an `order` parameter to most functions, which determines the order of > the splines used for interpolation, e.g. > > x = np.array([1,2,3.]) > ndimage.zoom(x,5/3.,order=1) > > In `scipy.interpolate`, you also have `interp2d` and > SmoothBivariateSpline (with kx=1,ky=1). > > Furthermore, if you need C code I can provide you with either ctypes > or cython examples. > > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dharhas.Pothina at twdb.state.tx.us Wed May 21 13:05:35 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 21 May 2008 12:05:35 -0500 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays Message-ID: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> Hi, I'm trying to read 3 dimensional time series data from a file and store it in a numpy array so I can analyze the data. I'm having problems working out how to append the 3D array from each timestep to make a 4D array. I worked out that I could make a list of 3D arrays but if I do that I'm having issues slicing it the way I need to. 
My final need is an array data_array[time,level,node,var] that I can slice by saying data_array[:,1,23,1] to get a time history at level=1,node=23,var=1 etc., or I need to know how to slice a list (data_list[:][1,23,1] gives an error) i.e.: a = array([[[ 1, 2],[ 3, 4],[ 5, 6]],[[101, 102],[103, 104],[105, 106]],[[201, 202],[203, 204],[205, 206]]]) data_array=array([]) data_list=[] newdata = a for i in arange(5): data_array = somefunction(data,newdata) # I've tried hstack,vstack,dstack,array etc data_array[i] = newdata # this is what I would do in Matlab but doesn't work in numpy data_list[len(data_list):] = [newdata] # this works newdata = newdata + 1000 Any help would be greatly appreciated. thanks - dharhas From robince at gmail.com Wed May 21 13:08:45 2008 From: robince at gmail.com (Robin) Date: Wed, 21 May 2008 18:08:45 +0100 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <9457e7c80805210829l29dbb9a4gc4afa70e45a7d129@mail.gmail.com> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> <9457e7c80805210829l29dbb9a4gc4afa70e45a7d129@mail.gmail.com> Message-ID: On Wed, May 21, 2008 at 4:29 PM, Stéfan van der Walt wrote: > Hi Mike > > 2008/5/21 Michael Hearne : >> If I try that on a larger example, similar to the second one I made >> >> yesterday: >> >> nrows = 648 >> >> ncols = 690 >> >> data = rand(nrows,ncols) >> >> i,j = (data < 0.14).nonzero() >> >> data[i,:][:,j] = data[i,:][:,j]*0 >> >> I get another segmentation fault. > > I can confirm that this bug exists in r5120. Should this even work? While it shouldn't segfault - I don't think it will do what is expected. I think it works for slices but when fancy indexing is used there is some __setitem__ trick if I remember correctly that doesn't work if you index twice. Michael - is there a reason you really need to keep i and j indices separate or is it left over from Matlab? As Alan suggested you can use z = (data < 0.14).nonzero() or just index directly with boolean indexing. You can do operations involving the original array as well with boolean indexing: idx = data < 0.14 data[idx] *= 0 (in-place multiplication) or data[idx] = 100*data[idx] + data[idx]**2 Even if the segfault you are seeing is fixed, I doubt that double indexing like that is the best way to achieve what you're trying to do. Cheers Robin From robince at gmail.com Wed May 21 13:15:57 2008 From: robince at gmail.com (Robin) Date: Wed, 21 May 2008 18:15:57 +0100 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> <9457e7c80805210829l29dbb9a4gc4afa70e45a7d129@mail.gmail.com> Message-ID: Sorry, just re-read your original mail and noticed that you are using Alan's method. Still keep in mind you can index directly with the boolean result of the comparison which saves the nonzero() call if speed is an issue.
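A self-contained sketch of the boolean-indexing route just described; the 0.14 threshold and the 648x690 shape are taken from the earlier example in this thread:

import numpy as np

data = np.random.rand(648, 690)
mask = data < 0.14          # boolean mask, no nonzero() call needed

# any of these touch only the selected elements of the original array:
data[mask] = 0                                    # assign a constant
# data[mask] *= 0                                 # in-place multiply
# data[mask] = 100*data[mask] + data[mask]**2     # expression using the old values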
Cheers Robin From robert.kern at gmail.com Wed May 21 13:45:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 May 2008 12:45:20 -0500 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays In-Reply-To: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> References: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> Message-ID: <3d375d730805211045j5ad2c161v9c7f4310f4919723@mail.gmail.com> On Wed, May 21, 2008 at 12:05 PM, Dharhas Pothina wrote: > Hi, > > I'm trying to read 3 dimensional time series data from a file and store it in a numpy array so I can analyze the data. I'm having problems working out how to append the 3D array from each timestep to make a 4D array. I worked out that I could make a list of 3D arrays but if I do that I'm having issues slicing it the way I need to. Keep appending to the list to build it up. Then make an array from that list using array(). from numpy import array a = array([[[ 1, 2],[ 3, 4],[ 5, 6]],[[101, 102],[103, 104],[105, 106]],[[201, 202],[203, 204],[205, 206]]]) data_list=[] newdata = a for i in arange(5): data_list.append(newdata) # This is the idiomatic way to append to a list. newdata = newdata + 1000 data_array = array(data_list) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Wed May 21 13:58:28 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 21 May 2008 13:58:28 -0400 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays In-Reply-To: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> References: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> Message-ID: 2008/5/21 Dharhas Pothina : > I'm trying to read 3 dimensional time series data from a file and store it in a numpy array so I can analyze the data. I'm having problems working out how to append the 3D array from each timestep to make a 4D array. I worked out that I could make a list of 3D arrays but if I do that I'm having issues slicing it the way I need to. As a general rule, growing arrays should be avoided; it's not a very efficient process. If you know the final shape of the 4D array, the best thing to do is to allocate it all at once, then assign to it: big = np.zeros((a,b,c,d))/0. for i in range(a): big[i,...] = read_3D_chunk() If you don't know the final shape, it's probably best to load the lot into a python list (which support efficient appending), then convert it to an array. array() should normally do this: big_list = [] for i in range(a): big.append(read_3D_chunk()) big = np.array(big_list) But array() is sometimes too clever for its own good, so I would be tempted to use concatenate along a new axis: big_list = [] for i in range(a): big.append(read_3D_chunk()[np.newaxis,...]) big = np.concatenate(tuple(big_list)) Good luck, Anne From Dharhas.Pothina at twdb.state.tx.us Wed May 21 14:07:26 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 21 May 2008 13:07:26 -0500 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays In-Reply-To: <3d375d730805211045j5ad2c161v9c7f4310f4919723@mail.gmail.com> References: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> <3d375d730805211045j5ad2c161v9c7f4310f4919723@mail.gmail.com> Message-ID: <48341E8E.63BA.009B.0@twdb.state.tx.us> Perfect. Thanks I knew there must be something simple I was missing. 
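For completeness, a runnable sketch of the preallocation route suggested above, for the case where the number of time steps is known in advance; the sizes and the read_3D_chunk function are placeholders:

import numpy as np

ntime, nlevel, nnode, nvar = 5, 6, 21216, 2      # placeholder sizes

def read_3D_chunk():
    # stand-in for reading one (level, node, variable) block per time step
    return np.random.rand(nlevel, nnode, nvar)

data_array = np.empty((ntime, nlevel, nnode, nvar))
data_array.fill(np.nan)                          # NaN marks anything left unassigned
for t in range(ntime):
    data_array[t, ...] = read_3D_chunk()

print(data_array[:, 1, 23, 1])                   # time history at level 1, node 23, variable 1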
- dharhas >>> "Robert Kern" 5/21/2008 12:45 PM >>> On Wed, May 21, 2008 at 12:05 PM, Dharhas Pothina wrote: > Hi, > > I'm trying to read 3 dimensional time series data from a file and store it in a numpy array so I can analyze the data. I'm having problems working out how to append the 3D array from each timestep to make a 4D array. I worked out that I could make a list of 3D arrays but if I do that I'm having issues slicing it the way I need to. Keep appending to the list to build it up. Then make an array from that list using array(). from numpy import array a = array([[[ 1, 2],[ 3, 4],[ 5, 6]],[[101, 102],[103, 104],[105, 106]],[[201, 202],[203, 204],[205, 206]]]) data_list=[] newdata = a for i in arange(5): data_list.append(newdata) # This is the idiomatic way to append to a list. newdata = newdata + 1000 data_array = array(data_list) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From Dharhas.Pothina at twdb.state.tx.us Wed May 21 14:14:56 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 21 May 2008 13:14:56 -0500 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays In-Reply-To: References: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> Message-ID: <48342050.63BA.009B.0@twdb.state.tx.us> Thanks Anne, I don't know the final size so appending to a list is probably best. The array(big_list) seems to be working but for my future reference, what does the following do? big.append(read_3D_chunk()[np.newaxis,...]) - dharhas >>> "Anne Archibald" 5/21/2008 12:58 PM >>> 2008/5/21 Dharhas Pothina : > I'm trying to read 3 dimensional time series data from a file and store it in a numpy array so I can analyze the data. I'm having problems working out how to append the 3D array from each timestep to make a 4D array. I worked out that I could make a list of 3D arrays but if I do that I'm having issues slicing it the way I need to. As a general rule, growing arrays should be avoided; it's not a very efficient process. If you know the final shape of the 4D array, the best thing to do is to allocate it all at once, then assign to it: big = np.zeros((a,b,c,d))/0. for i in range(a): big[i,...] = read_3D_chunk() If you don't know the final shape, it's probably best to load the lot into a python list (which support efficient appending), then convert it to an array. 
array() should normally do this: big_list = [] for i in range(a): big.append(read_3D_chunk()) big = np.array(big_list) But array() is sometimes too clever for its own good, so I would be tempted to use concatenate along a new axis: big_list = [] for i in range(a): big.append(read_3D_chunk()[np.newaxis,...]) big = np.concatenate(tuple(big_list)) Good luck, Anne _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Wed May 21 14:36:59 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 21 May 2008 14:36:59 -0400 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays In-Reply-To: <48342050.63BA.009B.0@twdb.state.tx.us> References: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> <48342050.63BA.009B.0@twdb.state.tx.us> Message-ID: 2008/5/21 Dharhas Pothina : > > Thanks Anne, I don't know the final size so appending to a list is probably best. The array(big_list) seems to be working but for my future reference, what does the following do? > > big.append(read_3D_chunk()[np.newaxis,...]) Well, concatenate concatenates arrays along an already-existing axis, so the arrays need to be four-dimensional when I concatenate them. So I turn my three-dimensional arrays, of shape (a,b,c), into four-dimensional arrays of shape (1,a,b,c); I later concatenate along axis 0. The way to add this new length-1 axis is by using np.newaxis. For example, to turn an array A of shape (2,3) into an array of shape (2,1,3): A = A[:,np.newaxis,:] The "..." just means "as many : as needed"; very useful when writing generic functions, but here I was just too lazy to write [np.newaxis,:,:,:]. Anne From Dharhas.Pothina at twdb.state.tx.us Wed May 21 14:42:30 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 21 May 2008 13:42:30 -0500 Subject: [SciPy-user] Generating a 4D array from a set of 3D arrays In-Reply-To: References: <4834100F0200009B00012CAE@GWWEB.twdb.state.tx.us> <48342050.63BA.009B.0@twdb.state.tx.us> Message-ID: <483426C5.63BA.009B.0@twdb.state.tx.us> Thanks. I'm finding that scipy/numpy are very powerful (much more than matlab) but arrays/lists/tuples etc takes some getting used to. - dharhas >>> "Anne Archibald" 5/21/2008 1:36 PM >>> 2008/5/21 Dharhas Pothina : > > Thanks Anne, I don't know the final size so appending to a list is probably best. The array(big_list) seems to be working but for my future reference, what does the following do? > > big.append(read_3D_chunk()[np.newaxis,...]) Well, concatenate concatenates arrays along an already-existing axis, so the arrays need to be four-dimensional when I concatenate them. So I turn my three-dimensional arrays, of shape (a,b,c), into four-dimensional arrays of shape (1,a,b,c); I later concatenate along axis 0. The way to add this new length-1 axis is by using np.newaxis. For example, to turn an array A of shape (2,3) into an array of shape (2,1,3): A = A[:,np.newaxis,:] The "..." just means "as many : as needed"; very useful when writing generic functions, but here I was just too lazy to write [np.newaxis,:,:,:]. 
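A runnable version of the list-then-convert route for the case where the final size is not known up front; note that it is the list itself that must be appended to (big_list.append), and read_3D_chunk is again only a stand-in:

import numpy as np

def read_3D_chunk():
    # stand-in for reading one 3D block of shape (levels, nodes, variables)
    return np.random.rand(6, 21216, 2)

big_list = []
for t in range(5):
    # give each chunk a leading length-1 time axis: shape (1, 6, 21216, 2)
    big_list.append(read_3D_chunk()[np.newaxis, ...])

big = np.concatenate(big_list, axis=0)           # final shape (5, 6, 21216, 2)
print(big.shape)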
Anne _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From aisaac at american.edu Wed May 21 15:29:35 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 May 2008 15:29:35 -0400 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices In-Reply-To: References: <1E04AF86-C790-405C-81E5-7F5CCFD0C2A8@usgs.gov> Message-ID: On Tue, May 20, 2008 at 9:43 PM, Michael Hearne wrote: > Neilen - Thanks. Unfortunately, I've either discovered > a bug or have failed to understand how to use this > function. The below code snippet: > from pylab import * > import numpy > print numpy.__version__, numpy.__file__ > data = rand(648,690) > i,j = (data < 0.14).nonzero() > data[ix_(i,j)] = data[ix_(i,j)]*0 I believe your problem is with ``data[ix_(i,j)]``. Let's say that i and j are about 60,000 in length. So you are trying to create a 60k ? 60k array, which has 3600M elements. I'm guessing you do not have enough memory for this. Otherwise, it should work (although it is very wasteful). hth, Alan Isaac From johannes.stromberg at gmail.com Wed May 21 16:41:45 2008 From: johannes.stromberg at gmail.com (=?ISO-8859-1?Q?Johannes_Str=F6mberg?=) Date: Wed, 21 May 2008 22:41:45 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: Message-ID: Thanks a lot, it works like a charm. Does anyone know of an efficient way of implementing a threshold filter, i.e. where the resulting value is either the difference between the current value and the threshold (if the value is above the threshold) or otherwise 0? /Johannes On Wed, May 21, 2008 at 4:26 PM, Zachary Pincus wrote: > Hello, > >> When I use the asarray() method on my PIL image I get a 3-dimensional >> array, shape is (w, h, 3 [rgb-values]). >> >> I am wondering how I can transform this to something that is >> compatible with f.e. ndimage.gaussian_filter? (When I run it directly >> it simply removes color from the image). > > ndimage works as the name implies: it is a library for dealing with n- > dimensional images. So in this case, it treated your input as a [w,h, > 3] array of voxels (a 3D image), and then applied the gaussian > smoothing of the requested variance across all three dimensions. The > result of which was probably (a) some degree of smoothing in the x and > y dimensions (as requested), and (b) the same degree of smoothing > across the "z" (color channel) dimension, which (as it is only "3 > voxels high" translates into quite a bit of mixing of the pixels). > This "z smoothing" of course translates into mixing the color > channels, probably by quite a large degree if your gaussian variance > was anything above one pixel. Which would give the effect of "removing > the color" from the image. > > So, you need to apply the gaussian filter to each channel > independently. You could either do this with a for loop, or even > easier, pass a tuple to the sigma parameter to describe the requested > variance in each dimension. To smooth by a 2-pixel stdev gaussian in > w, a 4-pixel gaussian in h, and to do no smoothing across color > channels, just pass 'sigma=[2,4,0]' to gaussian_filter. 
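A minimal sketch of the per-channel smoothing just described; the file name, the PIL round-trip and the uint8 conversion are assumptions (the image is assumed to be RGB with the colour channel on the last axis), while the sigma values follow the example above:

import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open('photo.jpg'))        # placeholder file, colour channel last

# 2- and 4-pixel sigmas on the two spatial axes, 0 on the channel axis,
# so each colour channel is smoothed on its own and no mixing occurs
smoothed = ndimage.gaussian_filter(img.astype(float), sigma=[2, 4, 0])

Image.fromarray(np.clip(smoothed, 0, 255).astype(np.uint8)).save('photo_smoothed.jpg')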
> > Zach > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From amcmorl at gmail.com Wed May 21 17:01:10 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Wed, 21 May 2008 17:01:10 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: Message-ID: 2008/5/21 Johannes Str?mberg : > Does anyone know of an efficient way of implementing a threshold > filter, i.e. where the resulting value is either the difference > between the current value and the threshold (if the value is above the > threshold) or otherwise 0? How about: filtered = np.where(img > thr, img - thr, 0) ? -- AJC McMorland, PhD candidate Physiology, University of Auckland (Nearly) post-doctoral research fellow Neurobiology, University of Pittsburgh From stefan at sun.ac.za Wed May 21 17:04:36 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 23:04:36 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: Message-ID: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> Hi Johannes 2008/5/21 Johannes Str?mberg : > Thanks a lot, it works like a charm. > > Does anyone know of an efficient way of implementing a threshold > filter, i.e. where the resulting value is either the difference > between the current value and the threshold (if the value is above the > threshold) or otherwise 0? Unless you have really large data-sets, you can do: mask = x > threshold x[~mask] = 0 x[mask] -= threshold Regards St?fan From peridot.faceted at gmail.com Wed May 21 17:15:31 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 21 May 2008 17:15:31 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> Message-ID: 2008/5/21 St?fan van der Walt : > 2008/5/21 Johannes Str?mberg : >> Thanks a lot, it works like a charm. >> >> Does anyone know of an efficient way of implementing a threshold >> filter, i.e. where the resulting value is either the difference >> between the current value and the threshold (if the value is above the >> threshold) or otherwise 0? > > Unless you have really large data-sets, you can do: > > mask = x > threshold > x[~mask] = 0 > x[mask] -= threshold Or if you want it inplace: np.subtract(x,threshold,x) np.maximum(x,0,x) Anne From stefan at sun.ac.za Wed May 21 17:42:00 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 23:42:00 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> Message-ID: <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> 2008/5/21 Anne Archibald : > 2008/5/21 St?fan van der Walt : > >> 2008/5/21 Johannes Str?mberg : >>> Thanks a lot, it works like a charm. >>> >>> Does anyone know of an efficient way of implementing a threshold >>> filter, i.e. where the resulting value is either the difference >>> between the current value and the threshold (if the value is above the >>> threshold) or otherwise 0? >> >> Unless you have really large data-sets, you can do: >> >> mask = x > threshold >> x[~mask] = 0 >> x[mask] -= threshold > > Or if you want it inplace: > np.subtract(x,threshold,x) > np.maximum(x,0,x) That's a good idea. It is faster than `where`, too. 
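For reference, the three threshold-filter variants discussed in this thread side by side; x is any float array and thr the threshold, both made up here:

import numpy as np

x = np.random.rand(1000, 1000)
thr = 0.5

# 1. where(): builds a new array
out1 = np.where(x > thr, x - thr, 0)

# 2. boolean mask: two masked assignments
out2 = x.copy()
mask = out2 > thr
out2[~mask] = 0
out2[mask] -= thr

# 3. fully in place (here on a copy): subtract, then clamp at zero
out3 = x.copy()
np.subtract(out3, thr, out3)
np.maximum(out3, 0, out3)

print(np.allclose(out1, out2), np.allclose(out1, out3))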
Regards St?fan From aisaac at american.edu Wed May 21 18:58:17 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 May 2008 18:58:17 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> Message-ID: >>> x[mask] -= threshold 2008/5/21 Anne Archibald : >> Or if you want it inplace: >> np.subtract(x,threshold,x) Could you please elaborate on the differences between these? Thank you, Alan Isaac From robert.kern at gmail.com Wed May 21 19:01:07 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 May 2008 18:01:07 -0500 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> Message-ID: <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> On Wed, May 21, 2008 at 5:58 PM, Alan G Isaac wrote: >>>> x[mask] -= threshold > > 2008/5/21 Anne Archibald : >>> Or if you want it inplace: >>> np.subtract(x,threshold,x) > > Could you please elaborate on the differences between these? One is masked and the other isn't. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cohen at slac.stanford.edu Wed May 21 19:02:13 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 22 May 2008 01:02:13 +0200 Subject: [SciPy-user] how to resolve a cubic equation in scipy? In-Reply-To: References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> Message-ID: <4834A9F5.4090507@slac.stanford.edu> hi, just for fun I compared the 3 methods with timeit I got : [cohen at jarrett ~]$ ipython testSolveCubic.py using fsolve: 2.44568693982 1.84741111298e-12 time= 0.000438928604126 using roots: [-1.22284347+2.30637633j -1.22284347-2.30637633j 2.44568694+0.j ] time= 0.002277135849 using brentq 2.44568693982 time= 3.31401824951e-05 So brentq is indeed by far the fastest. I am slightly surprised that fsolve outperforms roots, but I guess it could be due to the fact that the problem is too simple... Anyway, for completeness here is the script : from scipy import optimize,roots import timeit def myCubicEq(r): return 1.2*r**3 + r - 20 print 'using fsolve:' start=timeit.time.time() results=optimize.fsolve(myCubicEq, 5) stop=timeit.time.time() print results,myCubicEq(results) print 'time= ',stop-start print '\n' print 'using roots:' start=timeit.time.time() results2=roots([1.2, 0, 1, -20]) stop=timeit.time.time() print results2 print 'time= ',stop-start print '\n' print 'using brentq' start=timeit.time.time() results3=optimize.brentq(myCubicEq,0,10) stop=timeit.time.time() print results3 print 'time= ',stop-start cheers, Johann Anne Archibald wrote: > 2008/5/16 Pauli Virtanen : > >> Fri, 16 May 2008 21:03:22 +0800, zhang chi wrote: >> >>> I want to resolve a cubic equation 1.2r^3 + r - 20 = 0. >>> >>>>> scipy.roots([1.2, 0, 1, -20]) >>>>> >> array([-1.22284347+2.30637633j, -1.22284347-2.30637633j, >> 2.44568694+0.j ]) >> > > Given that the equation has just one real root, scipy's root-finders > (e.g. brentq) should be reliable and fast. > > Of course, if the OP really only has the one equation to solve, it's > done. But presumably they're working with a family of related > cubics... 
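A small sketch of that case: reusing brentq across a family of cubics 1.2*r**3 + r - c = 0 for several right-hand sides c. The [0, 10] bracket is an assumption that happens to hold for the values shown, since the cubic is monotonically increasing and so has exactly one real root:

import numpy as np
from scipy import optimize

def cubic(r, c):
    return 1.2 * r**3 + r - c

for c in [5.0, 10.0, 20.0, 40.0]:
    root = optimize.brentq(cubic, 0, 10, args=(c,))
    print(c, root, cubic(root, c))   # residual should be ~0 at the root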
> > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed May 21 19:17:47 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 May 2008 18:17:47 -0500 Subject: [SciPy-user] how to resolve a cubic equation in scipy? In-Reply-To: <4834A9F5.4090507@slac.stanford.edu> References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> <4834A9F5.4090507@slac.stanford.edu> Message-ID: <3d375d730805211617h3c745280te5722073bee78816@mail.gmail.com> On Wed, May 21, 2008 at 6:02 PM, Johann Cohen-Tanugi wrote: > hi, > just for fun I compared the 3 methods with timeit I got : > [cohen at jarrett ~]$ ipython testSolveCubic.py > using fsolve: > 2.44568693982 1.84741111298e-12 > time= 0.000438928604126 > > using roots: > [-1.22284347+2.30637633j -1.22284347-2.30637633j 2.44568694+0.j ] > time= 0.002277135849 > > using brentq > 2.44568693982 > time= 3.31401824951e-05 > > So brentq is indeed by far the fastest. I am slightly surprised that > fsolve outperforms roots, but I guess it could be due to the fact that > the problem is too simple... Quite possibly. It's also worth noting that roots() actually gives you all three roots even if they are complex where the others can only give you one (real) root. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Wed May 21 21:11:05 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 May 2008 21:11:05 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com><9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: >>>>> x[mask] -= threshold >> 2008/5/21 Anne Archibald : >>>> Or if you want it inplace: >>>> np.subtract(x,threshold,x) > On Wed, May 21, 2008 at 5:58 PM, Alan G Isaac > wrote: >> Could you please elaborate on the differences between these? On Wed, 21 May 2008, Robert Kern apparently wrote: > One is masked and the other isn't. Well, OK. Trying again ... Why did Anne switch to ``subtract`` in order to illustrate in place subtraction? Thanks, Alan From shao at msg.ucsf.edu Wed May 21 21:58:28 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Wed, 21 May 2008 18:58:28 -0700 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 Message-ID: With scipy version 0.7.0.dev4377, I got this ImportError when trying to run scipy.test(): Traceback (most recent call last): File "", line 1, in File "/jws30/haase/Priithon_25_lin/scipy/testing/nulltester.py", line 14, in test 'http://somethingaboutorange.com/mrl/projects/nose' ImportError: Need nose >=0.10 for tests - see http://somethingaboutorange.com/mrl/projects/nose Any suggestions? --lin From peridot.faceted at gmail.com Wed May 21 22:00:06 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 21 May 2008 22:00:06 -0400 Subject: [SciPy-user] PIL and gaussian_filter? 
In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: 2008/5/21 Alan G Isaac : >>>>>> x[mask] -= threshold > >>> 2008/5/21 Anne Archibald : >>>>> Or if you want it inplace: >>>>> np.subtract(x,threshold,x) > > >> On Wed, May 21, 2008 at 5:58 PM, Alan G Isaac >> wrote: >>> Could you please elaborate on the differences between these? > > > On Wed, 21 May 2008, Robert Kern apparently wrote: >> One is masked and the other isn't. > > > Well, OK. > > Trying again ... > > Why did Anne switch to ``subtract`` in order to > illustrate in place subtraction? Mostly because I was going to have to use maximum() that way, so it seemed more consistent. I think - might be worth checking - that the performance of A -= 3 and np.subtract(A,3,A) should be identical. They might even call the same code. Anne From robert.kern at gmail.com Wed May 21 22:01:10 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 May 2008 21:01:10 -0500 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: References: Message-ID: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> On Wed, May 21, 2008 at 8:58 PM, Lin Shao wrote: > With scipy version 0.7.0.dev4377, I got this ImportError when trying > to run scipy.test(): > > Traceback (most recent call last): > File "", line 1, in > File "/jws30/haase/Priithon_25_lin/scipy/testing/nulltester.py", > line 14, in test > 'http://somethingaboutorange.com/mrl/projects/nose' > ImportError: Need nose >=0.10 for tests - see > http://somethingaboutorange.com/mrl/projects/nose > > Any suggestions? Install nose. http://www.somethingaboutorange.com/mrl/projects/nose/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From shao at msg.ucsf.edu Wed May 21 22:17:12 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Wed, 21 May 2008 19:17:12 -0700 Subject: [SciPy-user] how to build in fftw3 support? Message-ID: Hello I have fftw3 installed and setup.py seems to have found it: fftw3_info: libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /opt/lib FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] But I don't see fftw3 in my scipy build. Is there anything extra I need to do? Thanks. --lin From aisaac at american.edu Wed May 21 22:21:15 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 21 May 2008 22:21:15 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com><9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com><3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: On Wed, 21 May 2008, Anne Archibald apparently wrote: > I think - might be worth checking - that the performance > of A -= 3 and np.subtract(A,3,A) should be identical. OK, thanks. I assumed the first should be at least as good, and that's why I asked about the change. Cheers, Alan From robert.kern at gmail.com Wed May 21 22:21:26 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 May 2008 21:21:26 -0500 Subject: [SciPy-user] how to build in fftw3 support? 
In-Reply-To: References: Message-ID: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> On Wed, May 21, 2008 at 9:17 PM, Lin Shao wrote: > Hello > > I have fftw3 installed and setup.py seems to have found it: > > fftw3_info: > libraries fftw3 not found in /usr/local/lib > libraries fftw3 not found in /opt/lib > FOUND: > libraries = ['fftw3'] > library_dirs = ['/usr/lib'] > define_macros = [('SCIPY_FFTW3_H', None)] > include_dirs = ['/usr/include'] > > But I don't see fftw3 in my scipy build. There is no scipy.fftw3 package, if that is what you were looking for. Everything is under scipy.fftpack. In particular, you can use the program ldd to double-check that the file scipy/fftpack/_fftpack.so is linked to libfftw3. > Is there anything extra I > need to do? Thanks. Probably not. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at enthought.com Thu May 22 00:00:45 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 21 May 2008 23:00:45 -0500 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> Message-ID: <4834EFED.1030404@enthought.com> Michael Hearne wrote: > Thanks for the help - the method below works just fine in my real > application (which has nothing to do with setting elements of an array > to zero - that was just a simple example of something to _do_ with the > data being indexed). > > However, I am still concerned about the larger problem of getting a > segfault using ANY method of indexing an array. If it is user error > that is causing the problem, then shouldn't I get an exception that > tells me my syntax is somehow incorrect? > > I've added SEGFAULT to the subject line in hopes that someone > responsible for the core NumPy code (Travis O., perhaps?) will take > notice and address the issue. If that happens, once again, I'm happy > to help test on my version of NumPy wherever needed. > > Thanks for all the suggestions, > > Mike > On May 21, 2008, at 8:10 AM, Alan McIntyre wrote: > >> On Wed, May 21, 2008 at 9:51 AM, Michael Hearne > > wrote: >>> If I try that on a larger example, similar to the second one I made >>> yesterday: >>> nrows = 648 >>> ncols = 690 >>> data = rand(nrows,ncols) >>> i,j = (data < 0.14).nonzero() >>> data[i,:][:,j] = data[i,:][:,j]*0 Just a suggestion, but you will get the attention of the NumPy developers more quickly on the numpy-discussion at scipy.org mailing list This kind of indexing is going to be very slow and it won't do what you want. A couple of points: data[i,:] will return a copy and then [:,j] will set values into that copy of the data (which is then not bound to anything so you lose it). I can't think of any example that I would use data[][] = You are right that it should not segfault. How much memory do you have? My first guess is that you are running out of memory and there is some malloc that is not being checked correctly, If you can run the code under gdb and give a backtrace when the error occurs it would be very helpful to trac it down. 
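A tiny demonstration of that point, with the array kept small so it runs instantly: the chained fancy index assigns into a throw-away copy, while a boolean mask modifies the original.

import numpy as np

data = np.random.rand(5, 4)
before = data.copy()
i, j = (data < 0.5).nonzero()

# the first fancy index returns a copy, so this assignment goes into a
# temporary and the original array is left untouched
data[i, :][:, j] = 0
print(np.array_equal(data, before))      # True: nothing was modified

# a boolean mask addresses the original array directly
data[data < 0.5] = 0
print(np.array_equal(data, before))      # False (almost surely): data really changed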
Thanks, -Travis From stefan at sun.ac.za Thu May 22 01:58:20 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 22 May 2008 07:58:20 +0200 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <4834EFED.1030404@enthought.com> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> <4834EFED.1030404@enthought.com> Message-ID: <9457e7c80805212258g7813071fifb083ce12e7c3ec6@mail.gmail.com> 2008/5/22 Travis E. Oliphant : > Michael Hearne wrote: >> Thanks for the help - the method below works just fine in my real >> application (which has nothing to do with setting elements of an array >> to zero - that was just a simple example of something to _do_ with the >> data being indexed). >> >> However, I am still concerned about the larger problem of getting a >> segfault using ANY method of indexing an array. If it is user error >> that is causing the problem, then shouldn't I get an exception that >> tells me my syntax is somehow incorrect? >> >> I've added SEGFAULT to the subject line in hopes that someone >> responsible for the core NumPy code (Travis O., perhaps?) will take >> notice and address the issue. If that happens, once again, I'm happy >> to help test on my version of NumPy wherever needed. >> >> Thanks for all the suggestions, >> >> Mike >> On May 21, 2008, at 8:10 AM, Alan McIntyre wrote: >> >>> On Wed, May 21, 2008 at 9:51 AM, Michael Hearne >> > wrote: >>>> If I try that on a larger example, similar to the second one I made >>>> yesterday: >>>> nrows = 648 >>>> ncols = 690 >>>> data = rand(nrows,ncols) >>>> i,j = (data < 0.14).nonzero() >>>> data[i,:][:,j] = data[i,:][:,j]*0 > Just a suggestion, but you will get the attention of the NumPy > developers more quickly on the numpy-discussion at scipy.org mailing list > > This kind of indexing is going to be very slow and it won't do what you > want. > > A couple of points: > > data[i,:] will return a copy and then [:,j] will set values into that > copy of the data (which is then not bound to anything so you lose it). > I can't think of any example that I would use > > data[][] = > > You are right that it should not segfault. How much memory do you > have? My first guess is that you are running out of memory and there > is some malloc that is not being checked correctly, If you can run the > code under gdb and give a backtrace when the error occurs it would be > very helpful to trac it down. This is where it happens: #0 0x00509c8f in DOUBLE_copyswap (dst=0x6ccaa000, src=0x16d0058, swap=0, arr=0x489460) at arraytypes.inc.src:993 #1 0x00568ce7 in array_subscript (self=, op=) at arrayobject.c:2556 #2 0x00569042 in array_subscript_nice (self=0x48a620, op=0x77aa8) at arrayobject.c:3178 Regards St?fan From oliphant at enthought.com Thu May 22 02:37:01 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 22 May 2008 01:37:01 -0500 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) 
In-Reply-To: <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> Message-ID: <4835148D.9000807@enthought.com> Michael Hearne wrote: > Thanks for the help - the method below works just fine in my real > application (which has nothing to do with setting elements of an array > to zero - that was just a simple example of something to _do_ with the > data being indexed). > > However, I am still concerned about the larger problem of getting a > segfault using ANY method of indexing an array. If it is user error > that is causing the problem, then shouldn't I get an exception that > tells me my syntax is somehow incorrect? > > I've added SEGFAULT to the subject line in hopes that someone > responsible for the core NumPy code (Travis O., perhaps?) will take > notice and address the issue. If that happens, once again, I'm happy > to help test on my version of NumPy wherever needed. > Thank you for finding this bug. It is a bug due to over-flow calculations causing a loop not to terminate correctly (therefore walking over available memory). It should be fixed in latest SVN version of NumPy. -Travis From stefan at sun.ac.za Thu May 22 02:58:35 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 22 May 2008 08:58:35 +0200 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <4835148D.9000807@enthought.com> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> <4835148D.9000807@enthought.com> Message-ID: <9457e7c80805212358p22cf8f51o28a807d4b0dff4f1@mail.gmail.com> 2008/5/22 Travis E. Oliphant : > Michael Hearne wrote: >> Thanks for the help - the method below works just fine in my real >> application (which has nothing to do with setting elements of an array >> to zero - that was just a simple example of something to _do_ with the >> data being indexed). >> >> However, I am still concerned about the larger problem of getting a >> segfault using ANY method of indexing an array. If it is user error >> that is causing the problem, then shouldn't I get an exception that >> tells me my syntax is somehow incorrect? >> >> I've added SEGFAULT to the subject line in hopes that someone >> responsible for the core NumPy code (Travis O., perhaps?) will take >> notice and address the issue. If that happens, once again, I'm happy >> to help test on my version of NumPy wherever needed. >> > Thank you for finding this bug. It is a bug due to over-flow > calculations causing a loop not to terminate correctly (therefore > walking over available memory). It should be fixed in latest SVN > version of NumPy. And many bonus points to you for fixing the bug, as well as adding a regression test. 
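The test actually committed to NumPy is not reproduced here; a hypothetical, downsized regression check in the same spirit (nose-style, exercising the reported pattern without the large memory footprint of the original report) might look like:

import numpy as np
from numpy.testing import assert_array_equal

def test_chained_fancy_index_assignment():
    # exercise the pattern from this thread on a small array: it must run
    # without crashing, and since the chained fancy index assigns into a
    # copy, the original data must come out unchanged
    np.random.seed(0)
    data = np.random.rand(64, 69)
    saved = data.copy()
    i, j = (data < 0.14).nonzero()
    data[i, :][:, j] = data[i, :][:, j] * 0
    assert_array_equal(data, saved)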
Cheers St?fan From jr at sun.ac.za Thu May 22 03:32:16 2008 From: jr at sun.ac.za (Johann Rohwer) Date: Thu, 22 May 2008 09:32:16 +0200 Subject: [SciPy-user] optimize.fsolve In-Reply-To: <46f941590805210934i3bf085dy47ab557d8af547d8@mail.gmail.com> References: <90c482ab0805160603o3543aed8g91132da15aead5ed@mail.gmail.com> <005d01c8bb56$48715e70$a300a8c0@rodrigues> <1d36917a0805210831g35f44ffbha33481efc82a4dc6@mail.gmail.com> <006801c8bb5f$7ec18c80$a300a8c0@rodrigues> <46f941590805210934i3bf085dy47ab557d8af547d8@mail.gmail.com> Message-ID: <48352180.6040804@sun.ac.za> Jeevan Baretto wrote: > >>> from scipy import * > >>> def Eq1(x): > ... x**2+2.0*x-3 Shouldn't you be writing ... return x**2+2.0*x-3 in the last line? Johann From haase at msg.ucsf.edu Thu May 22 03:42:51 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 22 May 2008 09:42:51 +0200 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> References: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> Message-ID: On Thu, May 22, 2008 at 4:01 AM, Robert Kern wrote: > On Wed, May 21, 2008 at 8:58 PM, Lin Shao wrote: >> With scipy version 0.7.0.dev4377, I got this ImportError when trying >> to run scipy.test(): >> >> Traceback (most recent call last): >> File "", line 1, in >> File "/jws30/haase/Priithon_25_lin/scipy/testing/nulltester.py", >> line 14, in test >> 'http://somethingaboutorange.com/mrl/projects/nose' >> ImportError: Need nose >=0.10 for tests - see >> http://somethingaboutorange.com/mrl/projects/nose >> >> Any suggestions? > > Install nose. > > http://www.somethingaboutorange.com/mrl/projects/nose/ > Is this now a (hard) dependency of SciPy !? My understanding that SciPy would not have any dependencies (besides numpy of course) .... not even BLAS ... -Sebastian From robert.kern at gmail.com Thu May 22 03:51:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 02:51:30 -0500 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: References: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> Message-ID: <3d375d730805220051n4a3e2411x503ea3d41c9914f@mail.gmail.com> On Thu, May 22, 2008 at 2:42 AM, Sebastian Haase wrote: >> Install nose. >> >> http://www.somethingaboutorange.com/mrl/projects/nose/ >> > Is this now a (hard) dependency of SciPy !? For running the unit tests, yes. > My understanding that SciPy would not have any dependencies (besides > numpy of course) .... not even BLAS ... We're relaxing that for the unit tests (and just the unit tests). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From matthieu.brucher at gmail.com Thu May 22 03:53:20 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 22 May 2008 09:53:20 +0200 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: References: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> Message-ID: 2008/5/22 Sebastian Haase : > On Thu, May 22, 2008 at 4:01 AM, Robert Kern > wrote: > > On Wed, May 21, 2008 at 8:58 PM, Lin Shao wrote: > >> With scipy version 0.7.0.dev4377, I got this ImportError when trying > >> to run scipy.test(): > >> > >> Traceback (most recent call last): > >> File "", line 1, in > >> File "/jws30/haase/Priithon_25_lin/scipy/testing/nulltester.py", > >> line 14, in test > >> 'http://somethingaboutorange.com/mrl/projects/nose' > >> ImportError: Need nose >=0.10 for tests - see > >> http://somethingaboutorange.com/mrl/projects/nose > >> > >> Any suggestions? > > > > Install nose. > > > > http://www.somethingaboutorange.com/mrl/projects/nose/ > > > Is this now a (hard) dependency of SciPy !? > My understanding that SciPy would not have any dependencies (besides > numpy of course) .... not even BLAS ... > If you want to run the tests, you have to use nose. If you don't want to, you don't need nose. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Thu May 22 03:57:20 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 22 May 2008 00:57:20 -0700 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: References: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> Message-ID: On Thu, May 22, 2008 at 12:42 AM, Sebastian Haase wrote: > Is this now a (hard) dependency of SciPy !? > My understanding that SciPy would not have any dependencies (besides > numpy of course) .... not even BLAS ... If you want to run the tests, you have to have nose. Starting with NumPy 1.2, NumPy will also use nose for testing. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Thu May 22 03:50:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 22 May 2008 16:50:36 +0900 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: References: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> Message-ID: <483525CC.9080107@ar.media.kyoto-u.ac.jp> Sebastian Haase wrote: > Is this now a (hard) dependency of SciPy !? > My understanding that SciPy would not have any dependencies (besides > numpy of course) .... I think scipy has always depended on BLAS and LAPACK. cheers, David From stefan at sun.ac.za Thu May 22 04:07:08 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 22 May 2008 10:07:08 +0200 Subject: [SciPy-user] scipy.test() Need nose >= 0.10 In-Reply-To: References: <3d375d730805211901h2ef34c1fx7601088e92a5dc92@mail.gmail.com> Message-ID: <9457e7c80805220107te46adaeh2a9100dc3acb6733@mail.gmail.com> 2008/5/22 Sebastian Haase : >> Install nose. >> >> http://www.somethingaboutorange.com/mrl/projects/nose/ >> > Is this now a (hard) dependency of SciPy !? > My understanding that SciPy would not have any dependencies (besides > numpy of course) .... not even BLAS ... For testing, it is. 
It's pure Python and you can easy-install it if you wish. Regards St?fan From oliphant at enthought.com Thu May 22 11:08:18 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 22 May 2008 10:08:18 -0500 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <9457e7c80805212358p22cf8f51o28a807d4b0dff4f1@mail.gmail.com> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> <4835148D.9000807@enthought.com> <9457e7c80805212358p22cf8f51o28a807d4b0dff4f1@mail.gmail.com> Message-ID: <48358C62.8040406@enthought.com> St?fan van der Walt wrote: > 2008/5/22 Travis E. Oliphant : > >> Michael Hearne wrote: >> >>> Thanks for the help - the method below works just fine in my real >>> application (which has nothing to do with setting elements of an array >>> to zero - that was just a simple example of something to _do_ with the >>> data being indexed). >>> >>> However, I am still concerned about the larger problem of getting a >>> segfault using ANY method of indexing an array. If it is user error >>> that is causing the problem, then shouldn't I get an exception that >>> tells me my syntax is somehow incorrect? >>> >>> I've added SEGFAULT to the subject line in hopes that someone >>> responsible for the core NumPy code (Travis O., perhaps?) will take >>> notice and address the issue. If that happens, once again, I'm happy >>> to help test on my version of NumPy wherever needed. >>> >>> >> Thank you for finding this bug. It is a bug due to over-flow >> calculations causing a loop not to terminate correctly (therefore >> walking over available memory). It should be fixed in latest SVN >> version of NumPy. >> > > And many bonus points to you for fixing the bug, as well as adding a > regression test. > I did that just for you even if it was a bleary-eyed 1:30 am ;-) -Travis From Dharhas.Pothina at twdb.state.tx.us Thu May 22 11:46:01 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 22 May 2008 10:46:01 -0500 Subject: [SciPy-user] Slicing 3D array using two 1D arrays not working Message-ID: <48354EE90200009B00012D41@GWWEB.twdb.state.tx.us> Hi, I have an array a 3D array 'a' (a.shape = a(6, 21216, 2) ) and I want to slice it using a[levels,nodes,:] where levels=array([2,4]),nodes=array([12,1234,4566,1233]) etc. I can do b=a[:,nodes,:] c=b[levels,:,:] but if I try c=a[levels,nodes,:] ValueError: shape mismatch: objects cannot be broadcast to a single shape is there something I am doing wrong or is this just not possible? thanks - dharhas From oliphant at enthought.com Thu May 22 13:07:30 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 22 May 2008 12:07:30 -0500 Subject: [SciPy-user] Slicing 3D array using two 1D arrays not working In-Reply-To: <48354EE90200009B00012D41@GWWEB.twdb.state.tx.us> References: <48354EE90200009B00012D41@GWWEB.twdb.state.tx.us> Message-ID: <4835A852.2000008@enthought.com> Dharhas Pothina wrote: > Hi, > > I have an array a 3D array 'a' (a.shape = a(6, 21216, 2) ) and I want to slice it using a[levels,nodes,:] where > levels=array([2,4]),nodes=array([12,1234,4566,1233]) etc. > > I can do > > b=a[:,nodes,:] > c=b[levels,:,:] > > but if I try > c=a[levels,nodes,:] > > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > is there something I am doing wrong or is this just not possible? > What is a[levels, nodes, :] supposed to return? 
If you are expecting the cross-product, then I suspect what you want is il, in = numpy.ix_(levels, nodes) c = a[il, in,:] Indexing with a list in NumPy returns an "element-byelement" result so the two indexing lists have to have commensurate shapes. You get the cross-product by fiddling with the dimensions and turn levels into a (2,1)-array and nodes into a (1,4)-array. Then broadcasting handles createing the (2,4)-shape set of indices that you may have been expecting. That's all numpy.ix_ does is fiddle with the shapes to give you index arrays for getting the cross-product. -Travis From stefan at sun.ac.za Thu May 22 13:32:32 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 22 May 2008 19:32:32 +0200 Subject: [SciPy-user] extracting elements of a matrix using arrays as indices (SEGFAULT!) In-Reply-To: <48358C62.8040406@enthought.com> References: <1211320260.2760.2.camel@localhost.localdomain> <1d36917a0805210710t19d160dep6d67a39015fa9434@mail.gmail.com> <49B2E50A-1C2D-4424-8609-4984F0ED6C78@usgs.gov> <4835148D.9000807@enthought.com> <9457e7c80805212358p22cf8f51o28a807d4b0dff4f1@mail.gmail.com> <48358C62.8040406@enthought.com> Message-ID: <9457e7c80805221032p1294a54cv4592ef1a93a0a7fb@mail.gmail.com> 2008/5/22 Travis E. Oliphant : >> And many bonus points to you for fixing the bug, as well as adding a >> regression test. >> > I did that just for you even if it was a bleary-eyed 1:30 am ;-) One thing I learnt from The Godfather is that you should never under-estimate the value of an indebted friend! :) Now, I hope everybody follows your sterling example! By the way, I've started uploading my own patches for peer review, and discovered that I've been neglecting whitespace in contravention of article, er, I mean PEP 08. So I learn (just wish someone had told me earlier)! Thanks again, St?fan From shao at msg.ucsf.edu Thu May 22 13:41:40 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Thu, 22 May 2008 10:41:40 -0700 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> Message-ID: Thanks for the reply. I made sure that scipy/fftpack/_fftpack.so depends on libfftw3. Now if I from scipy import fftpack then I got fftpack (probably using fftw3). Here're some confusions I'd like to get clarified: 1. What's scipy's fft() function in relation to scipy.fftpack package? 2. What're numpy's fft package (from numpy import fft) and fftpack package (from numpy.fft import fftpack)? are they related to scipy.fftpack at all? Thanks. --lin On Wed, May 21, 2008 at 7:21 PM, Robert Kern wrote: > > On Wed, May 21, 2008 at 9:17 PM, Lin Shao wrote: > > Hello > > > > I have fftw3 installed and setup.py seems to have found it: > > > > fftw3_info: > > libraries fftw3 not found in /usr/local/lib > > libraries fftw3 not found in /opt/lib > > FOUND: > > libraries = ['fftw3'] > > library_dirs = ['/usr/lib'] > > define_macros = [('SCIPY_FFTW3_H', None)] > > include_dirs = ['/usr/include'] > > > > But I don't see fftw3 in my scipy build. > > There is no scipy.fftw3 package, if that is what you were looking for. > Everything is under scipy.fftpack. In particular, you can use the > program ldd to double-check that the file scipy/fftpack/_fftpack.so is > linked to libfftw3. 
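Coming back to the a[levels, nodes, :] question above, a runnable sketch of the numpy.ix_ suggestion; note that `in` is a Python keyword, so a different name is used for the node index array, and the array contents here are placeholders:

import numpy as np

a = np.random.rand(6, 21216, 2)           # (levels, nodes, variables)
levels = np.array([2, 4])
nodes = np.array([12, 1234, 4566, 1233])

# ix_ reshapes levels to (2,1) and nodes to (1,4); broadcasting then
# produces the full (2,4) cross-product of indices
ilev, inode = np.ix_(levels, nodes)
c = a[ilev, inode, :]

print(c.shape)                            # (2, 4, 2)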
> From Dharhas.Pothina at twdb.state.tx.us Thu May 22 13:50:48 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 22 May 2008 12:50:48 -0500 Subject: [SciPy-user] Slicing 3D array using two 1D arrays not working In-Reply-To: <4835A852.2000008@enthought.com> References: <48354EE90200009B00012D41@GWWEB.twdb.state.tx.us> <4835A852.2000008@enthought.com> Message-ID: <48356C28.63BA.009B.0@twdb.state.tx.us> I am trying to extract a subset of the original array. 'a' is a 3D array of (levels,nodes,variables). The values So lets say I have an array where a.shape = (6,21216,2) i.e 6 levels,21216 nodes and 2 data variables and I need only the data at 1st and 4th levels and at nodes 10,100 and 2400. I want to make a new array b where b.shape(2,3,2) b[0,0,:] = data variables at 1st level and node 10 b[0,1,:] = data variables at 1st level and node 100 b[0,1,:] = data variables at 1st level and node 2400 b[1,0,:] = data variables at 4th level and node 10 b[1,1,:] = data variables at 4th level and node 100 b[1,1,:] = data variables at 4th level and node 2400 I'm reading through your description of ix_ to see if I understand whats happening thanks, - dharhas >>> "Travis E. Oliphant" 5/22/2008 12:07 PM >>> Dharhas Pothina wrote: > Hi, > > I have an array a 3D array 'a' (a.shape = a(6, 21216, 2) ) and I want to slice it using a[levels,nodes,:] where > levels=array([2,4]),nodes=array([12,1234,4566,1233]) etc. > > I can do > > b=a[:,nodes,:] > c=b[levels,:,:] > > but if I try > c=a[levels,nodes,:] > > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > is there something I am doing wrong or is this just not possible? > What is a[levels, nodes, :] supposed to return? If you are expecting the cross-product, then I suspect what you want is il, in = numpy.ix_(levels, nodes) c = a[il, in,:] Indexing with a list in NumPy returns an "element-byelement" result so the two indexing lists have to have commensurate shapes. You get the cross-product by fiddling with the dimensions and turn levels into a (2,1)-array and nodes into a (1,4)-array. Then broadcasting handles createing the (2,4)-shape set of indices that you may have been expecting. That's all numpy.ix_ does is fiddle with the shapes to give you index arrays for getting the cross-product. -Travis _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Thu May 22 14:31:52 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 13:31:52 -0500 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> Message-ID: <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> On Thu, May 22, 2008 at 12:41 PM, Lin Shao wrote: > Thanks for the reply. I made sure that scipy/fftpack/_fftpack.so > depends on libfftw3. Now if I > from scipy import fftpack > then I got fftpack (probably using fftw3). > > Here're some confusions I'd like to get clarified: > 1. What's scipy's fft() function in relation to scipy.fftpack package? It's an alias to numpy.fft(). scipy/__init__.py does from numpy import * in order to expose the numpy functions with a few overrides. > 2. What're numpy's fft package (from numpy import fft) and fftpack > package (from numpy.fft import fftpack)? They are standard wrappers of the FORTRAN-converted-to-C FFTPACK library. 
They are meant to provide relatively unoptimized FFT functionality everywhere without much build hassle. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From shao at msg.ucsf.edu Thu May 22 14:52:51 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Thu, 22 May 2008 11:52:51 -0700 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> Message-ID: One more question. In scipy.fftpack, where isn't there a rfftn() function (just as numpy.fft.fftpack does)? Thanks! On Thu, May 22, 2008 at 11:31 AM, Robert Kern wrote: > On Thu, May 22, 2008 at 12:41 PM, Lin Shao wrote: >> Thanks for the reply. I made sure that scipy/fftpack/_fftpack.so >> depends on libfftw3. Now if I >> from scipy import fftpack >> then I got fftpack (probably using fftw3). >> >> Here're some confusions I'd like to get clarified: >> 1. What's scipy's fft() function in relation to scipy.fftpack package? > > It's an alias to numpy.fft(). scipy/__init__.py does > > from numpy import * > > in order to expose the numpy functions with a few overrides. > >> 2. What're numpy's fft package (from numpy import fft) and fftpack >> package (from numpy.fft import fftpack)? > > They are standard wrappers of the FORTRAN-converted-to-C FFTPACK > library. They are meant to provide relatively unoptimized FFT > functionality everywhere without much build hassle. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Thu May 22 15:06:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 14:06:43 -0500 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> Message-ID: <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> On Thu, May 22, 2008 at 1:52 PM, Lin Shao wrote: > One more question. In scipy.fftpack, where isn't there a rfftn() > function (just as numpy.fft.fftpack does)? I presume because of the lack of support from the non-FFTPACK backends. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From johannes.stromberg at gmail.com Thu May 22 15:42:01 2008 From: johannes.stromberg at gmail.com (=?ISO-8859-1?Q?Johannes_Str=F6mberg?=) Date: Thu, 22 May 2008 21:42:01 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: Thank you everyone, Unfortunately I was wrong about the calculations I actually need to perform. 
I have got a working set of operations for what I need (see below), but it is awfully slow. Anybody got any idea on how to make it faster? # in = array # gaussian = array # threshold = int # percent = int diff = in - gaussian diff = numpy.where(diff > threshold,diff,0) out = imdata + diff*(float(percent)/100) /Johannes From s.mientki at ru.nl Thu May 22 16:29:18 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 22 May 2008 22:29:18 +0200 Subject: [SciPy-user] Mean / std type conversion ? Message-ID: <4835D79E.3000106@ru.nl> hello, The following lines 1,2 don't work as I expect, lines 3,4 works correctly... print numpy.mean ( data, 0, dtype = int ) print numpy.std ( data, 0, int ) print numpy.mean ( data, 0 ).astype ( int ) print numpy.std ( data, 0 ).astype ( int ) Am I doing something wrong (numpy 1.0.4) ? Another question, isn't there a function that computes both mean and std ? thanks, Stef Mientki From robert.kern at gmail.com Thu May 22 16:37:22 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 15:37:22 -0500 Subject: [SciPy-user] Mean / std type conversion ? In-Reply-To: <4835D79E.3000106@ru.nl> References: <4835D79E.3000106@ru.nl> Message-ID: <3d375d730805221337i4adead27i518a6cc92fb90568@mail.gmail.com> On Thu, May 22, 2008 at 3:29 PM, Stef Mientki wrote: > hello, > > The following lines 1,2 don't work as I expect, > lines 3,4 works correctly... > > print numpy.mean ( data, 0, dtype = int ) > print numpy.std ( data, 0, int ) > print numpy.mean ( data, 0 ).astype ( int ) > print numpy.std ( data, 0 ).astype ( int ) > > Am I doing something wrong (numpy 1.0.4) ? Yes. 1,2 mean something entirely different than 3,4. They do not specify the type of the output but rather the type of the accumulator. This allows one to specify an accumulator type larger than the array's type to avoid overflow. > Another question, > isn't there a function that computes both mean and std ? No. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Thu May 22 17:26:52 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 22 May 2008 23:26:52 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: <9457e7c80805221426x787a1cceu20d69cf339eff1cb@mail.gmail.com> 2008/5/22 Johannes Str?mberg : > Thank you everyone, > > Unfortunately I was wrong about the calculations I actually need to > perform. I have got a working set of operations for what I need (see > below), but it is awfully slow. Anybody got any idea on how to make it > faster? > > # in = array > # gaussian = array > # threshold = int > # percent = int > > diff = in - gaussian > diff = numpy.where(diff > threshold,diff,0) > > out = imdata + diff*(float(percent)/100) You can save a couple of temporaries by doing diff /= percent / 100. diff += imdata return diff Cheers St?fan From stef.mientki at gmail.com Thu May 22 17:36:56 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Thu, 22 May 2008 23:36:56 +0200 Subject: [SciPy-user] Mean / std type conversion ? 
In-Reply-To: <3d375d730805221337i4adead27i518a6cc92fb90568@mail.gmail.com> References: <4835D79E.3000106@ru.nl> <3d375d730805221337i4adead27i518a6cc92fb90568@mail.gmail.com> Message-ID: <4835E778.7030806@gmail.com> Robert Kern wrote: > On Thu, May 22, 2008 at 3:29 PM, Stef Mientki wrote: > >> hello, >> >> The following lines 1,2 don't work as I expect, >> lines 3,4 works correctly... >> >> print numpy.mean ( data, 0, dtype = int ) >> print numpy.std ( data, 0, int ) >> print numpy.mean ( data, 0 ).astype ( int ) >> print numpy.std ( data, 0 ).astype ( int ) >> >> Am I doing something wrong (numpy 1.0.4) ? >> > > Yes. 1,2 mean something entirely different than 3,4. They do not > specify the type of the output but rather the type of the accumulator. > This allows one to specify an accumulator type larger than the array's > type to avoid overflow. > > >> Another question, >> isn't there a function that computes both mean and std ? >> > > No. > > thanks Robert, that explains. cheers, Stef From shao at msg.ucsf.edu Thu May 22 18:58:59 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Thu, 22 May 2008 15:58:59 -0700 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> Message-ID: On Thu, May 22, 2008 at 12:06 PM, Robert Kern wrote: > On Thu, May 22, 2008 at 1:52 PM, Lin Shao wrote: >> One more question. In scipy.fftpack, where isn't there a rfftn() >> function (just as numpy.fft.fftpack does)? > > I presume because of the lack of support from the non-FFTPACK backends. > Did you mean "lack of support of real-to-complex FFT in fftw3"? I'm pretty sure fftw3 has that support (see http://www.fftw.org/fftw3_doc/Multi_002dDimensional-DFTs-of-Real-Data.html#Multi_002dDimensional-DFTs-of-Real-Data ) Another question: Does the fftw3 wrap in scipy.fftpack support multithread (and therefore multiple processor cores) by default? Thanks. -lin From robert.kern at gmail.com Thu May 22 19:02:21 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 18:02:21 -0500 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> Message-ID: <3d375d730805221602x5137034fj576b11036056a589@mail.gmail.com> On Thu, May 22, 2008 at 5:58 PM, Lin Shao wrote: > On Thu, May 22, 2008 at 12:06 PM, Robert Kern wrote: >> On Thu, May 22, 2008 at 1:52 PM, Lin Shao wrote: >>> One more question. In scipy.fftpack, where isn't there a rfftn() >>> function (just as numpy.fft.fftpack does)? >> >> I presume because of the lack of support from the non-FFTPACK backends. > > Did you mean "lack of support of real-to-complex FFT in fftw3"? No, I meant *all* of the backends. But I'm just guessing; I didn't write the code. It could just be oversight or lack of contributions. > Another question: Does the fftw3 wrap in scipy.fftpack support > multithread (and therefore multiple processor cores) by default? Probably not. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Thu May 22 23:32:00 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 23 May 2008 12:32:00 +0900 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> Message-ID: <48363AB0.1030702@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > On Thu, May 22, 2008 at 1:52 PM, Lin Shao wrote: > >> One more question. In scipy.fftpack, where isn't there a rfftn() >> function (just as numpy.fft.fftpack does)? >> > > I presume because of the lack of support from the non-FFTPACK backends. > Actually, in the current state of affairs, scipy.fftpack provides the same functionalities for all backends. rfftn is simply not available at all in scipy.fftpack, whatever backend is used. cheers, David From robert.kern at gmail.com Thu May 22 23:47:24 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 22:47:24 -0500 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <48363AB0.1030702@ar.media.kyoto-u.ac.jp> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> On Thu, May 22, 2008 at 10:32 PM, David Cournapeau wrote: > Robert Kern wrote: >> On Thu, May 22, 2008 at 1:52 PM, Lin Shao wrote: >> >>> One more question. In scipy.fftpack, where isn't there a rfftn() >>> function (just as numpy.fft.fftpack does)? >> >> I presume because of the lack of support from the non-FFTPACK backends. > > Actually, in the current state of affairs, scipy.fftpack provides the > same functionalities for all backends. rfftn is simply not available at > all in scipy.fftpack, whatever backend is used. Yes, I know. What I meant was that perhaps since it wasn't part of the subset of functionality common to all of the backends, that it would be omitted from the interface entirely. Now, however, I just suspect that numpy.fft just got a little more development and no one though to implement it over in scipy.fftpack. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri May 23 00:04:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 23 May 2008 13:04:15 +0900 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> Message-ID: <4836423F.1090907@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Yes, I know. What I meant was that perhaps since it wasn't part of the > subset of functionality common to all of the backends, that it would > be omitted from the interface entirely. > Ok, sorry for the misunderstanding. 
> Now, however, I just suspect that numpy.fft just got a little more > development and no one though to implement it over in scipy.fftpack. > That's the most plausible explanation, I guess, since some functions in scipy.fftpack.drfft are only implemented by fftpack (real fft, for example). cheers, David From prabhu at aero.iitb.ac.in Fri May 23 01:24:28 2008 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Fri, 23 May 2008 10:54:28 +0530 Subject: [SciPy-user] [ANN] Mayavi sprint in July 2008 Message-ID: <4836550C.3050204@aero.iitb.ac.in> Hi, This is to announce a Mayavi sprint between 2nd July and 9th July, 2008. The sprint will be held at the Enthought Office, Austin, Texas. Here are the details: Dates: 2nd July 2008 to 9th July 2008 Location: Enthought Office at Austin, TX Please do join us -- even if it is only for a few days. Both Gaël Varoquaux and myself will be at the sprint on all days and there will be developers from Enthought joining us as well. Enthought is graciously hosting the sprint at their office. The agenda for the sprint is yet to be decided. Please contact me off-list if you plan on attending. Thanks! About Mayavi ------------ Mayavi seeks to provide easy and interactive visualization of 3D data. It is distributed under the terms of the new BSD license. It is built atop the Enthought Tool Suite and VTK. It provides an optional rich UI and a clean Pythonic API with native support for numpy arrays. Mayavi strives to be a reusable tool that can be embedded in your applications in different ways or combined with the envisage application-building framework to assemble domain-specific tools. For more information see here: http://code.enthought.com/projects/mayavi/ cheers, -- Prabhu Ramachandran http://www.aero.iitb.ac.in/~prabhu From lorenzo.isella at gmail.com Fri May 23 08:57:52 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Fri, 23 May 2008 14:57:52 +0200 Subject: [SciPy-user] SciPy integrate.odeint with interpolation Message-ID: Dear All, I have often used integrate.odeint to integrate ODE's. Now, consider the case y'(t)=f, where f is not an analytical function but rather a (discrete) set of experimental values. I wonder if it is possible to do something along these lines: (1) define a function g(t) which interpolates (maybe with a spline) the set of experimental measurements {f(t_i)}, i=1,2,...N. (2) Re-define the problem as y'(t)=g(t) Do you think that this approach is correct? Are there any pitfalls I should be aware of? Cheers Lorenzo From peridot.faceted at gmail.com Fri May 23 11:03:11 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 23 May 2008 11:03:11 -0400 Subject: [SciPy-user] SciPy integrate.odeint with interpolation In-Reply-To: References: Message-ID: 2008/5/23 Lorenzo Isella : > Dear All, > I have often used integrate.odeint to integrate ODE's. > Now, consider the case y'(t)=f, where f is not an analytical function > but rather a (discrete) set of experimental values. > I wonder if it is possible to do something along these lines: > (1) define a function g(t) which interpolates (maybe with a spline) > the set of experimental measurements {f(t_i)}, i=1,2,...N. > (2) Re-define the problem as y'(t)=g(t) > > Do you think that this approach is correct? Are there any pitfalls I > should be aware of? For the particular problem you describe - y'(t)=g(t), what you want is actually the antiderivative of g(t).
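(Concretely, since the right-hand side does not depend on y, y(t) = y(t_0) + integral_{t_0}^{t} g(s) ds, so a quadrature of the interpolated g is all that is needed rather than a full ODE solve.)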
For this particular case, if you use splrep/splev to produce g(t) interpolating f(t), you can use splint to obtain definite integrals of the resulting function. (Be aware that splrep and friends can use "smoothing" to fit experimental data with errors, and make sure you don't use smoothing if it's not what you want.) More generally, you would have y'(t,y) = f(t_i,y_i); I'm not totally sure what the best way is to handle this, but using an ODE integrator on an interpolated right-hand side seems reasonable to me. It'll have to be a bivariate spline, which puts some constraints on the sampled points. You will, of course, not be able to trust the result to the same degree of accuracy you could normally expect from an ODE integrator, and in fact it may be a real challenge to tell whether the spline is doing a good job interpolating your data. Anne From johannes.stromberg at gmail.com Fri May 23 12:07:57 2008 From: johannes.stromberg at gmail.com (=?ISO-8859-1?Q?Johannes_Str=F6mberg?=) Date: Fri, 23 May 2008 18:07:57 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: <9457e7c80805221426x787a1cceu20d69cf339eff1cb@mail.gmail.com> References: <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> <9457e7c80805221426x787a1cceu20d69cf339eff1cb@mail.gmail.com> Message-ID: Thanks, unfortunately that is only a 5% improvement. /Johannes > > You can save a couple of temporaries by doing > > diff /= percent / 100. > diff += imdata > return diff > > Cheers > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From shao at msg.ucsf.edu Fri May 23 13:21:35 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Fri, 23 May 2008 10:21:35 -0700 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <4836423F.1090907@ar.media.kyoto-u.ac.jp> References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> <4836423F.1090907@ar.media.kyoto-u.ac.jp> Message-ID: These all sound a little strange to me, because real-to-complex FFT is such a common task that it's inconceivable that any backend should omit it. If some backends indeed don't support it, I think it's fair to not consider it as a backend candidate for scipy.fftpack. So far there're only us outsiders guessing what's going on; it'd be nice if the insiders can speak out. -lin On Thu, May 22, 2008 at 9:04 PM, David Cournapeau wrote: > Robert Kern wrote: >> >> Yes, I know. What I meant was that perhaps since it wasn't part of the >> subset of functionality common to all of the backends, that it would >> be omitted from the interface entirely. >> > > Ok, sorry for the misunderstanding. >> Now, however, I just suspect that numpy.fft just got a little more >> development and no one though to implement it over in scipy.fftpack. >> > From robert.kern at gmail.com Fri May 23 14:53:24 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 May 2008 13:53:24 -0500 Subject: [SciPy-user] how to build in fftw3 support? 
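For reference, a minimal in-place sketch of the thresholded-difference step discussed in this thread; the function and array names are made up, the arrays are assumed to be floating point (integer images would need a cast first), and the scaling is a multiply by percent/100 so that it matches the original expression out = imdata + diff*(float(percent)/100):

import numpy

def add_thresholded_diff(imdata, gaussian, threshold, percent):
    # difference between the image and its Gaussian-blurred version
    diff = imdata - gaussian
    # zero the entries at or below the threshold, in place
    # (same effect as numpy.where(diff > threshold, diff, 0))
    diff[diff <= threshold] = 0
    # scale and add the original image back, both in place
    diff *= float(percent) / 100
    diff += imdata
    return diff

How much this helps depends on the array sizes and dtypes; the main saving is avoiding extra temporaries rather than changing the amount of arithmetic.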
In-Reply-To: References: <3d375d730805211921x633bdef1k96e218bedacde740@mail.gmail.com> <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> <4836423F.1090907@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730805231153g6dc8b571l5997c1bab3fd9f0b@mail.gmail.com> On Fri, May 23, 2008 at 12:21 PM, Lin Shao wrote: > These all sound a little strange to me, because real-to-complex FFT is > such a common task that it's inconceivable that any backend should > omit it. Note that rfftn() is N-dimensional. scipy.fftpack.rfft() does exist. > If some backends indeed don't support it, I think it's fair > to not consider it as a backend candidate for scipy.fftpack. > > So far there're only us outsiders guessing what's going on; it'd be > nice if the insiders can speak out. Neither David nor I are outsiders by any stretch of the imagination. However, we were not the ones who did the implementation of scipy.fftpack years ago. In any case, if you want it, go ahead and implement it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From shao at msg.ucsf.edu Fri May 23 16:50:11 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Fri, 23 May 2008 13:50:11 -0700 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: <3d375d730805231153g6dc8b571l5997c1bab3fd9f0b@mail.gmail.com> References: <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> <4836423F.1090907@ar.media.kyoto-u.ac.jp> <3d375d730805231153g6dc8b571l5997c1bab3fd9f0b@mail.gmail.com> Message-ID: > > Note that rfftn() is N-dimensional. scipy.fftpack.rfft() does exist. Sure, I did notice that, but for 1-d FFT who cares that much about which package to use. > >> If some backends indeed don't support it, I think it's fair >> to not consider it as a backend candidate for scipy.fftpack. >> >> So far there're only us outsiders guessing what's going on; it'd be >> nice if the insiders can speak out. > > Neither David nor I are outsiders by any stretch of the imagination. > However, we were not the ones who did the implementation of > scipy.fftpack years ago. I'm sorry, but I meant "outsiders to fftpack", not to scipy in general. > > In any case, if you want it, go ahead and implement it. > I should. From peridot.faceted at gmail.com Fri May 23 22:06:49 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 23 May 2008 22:06:49 -0400 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211404t2ea8b535lbc3569d6193a735f@mail.gmail.com> <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: 2008/5/22 Johannes Str?mberg : > Thank you everyone, > > Unfortunately I was wrong about the calculations I actually need to > perform. I have got a working set of operations for what I need (see > below), but it is awfully slow. Anybody got any idea on how to make it > faster? 
> > # in = array > # gaussian = array > # threshold = int > # percent = int > > diff = in - gaussian > diff = numpy.where(diff > threshold,diff,0) It might help a little to write: diff[diff>threshold] = 0 Otherwise there's not too much fat in there. Anne From robert.kern at gmail.com Fri May 23 23:39:23 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 May 2008 22:39:23 -0500 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> Message-ID: <3d375d730805232039y5725a06ah1bdd2cb58d03c1d6@mail.gmail.com> On Fri, May 23, 2008 at 9:06 PM, Anne Archibald wrote: > 2008/5/22 Johannes Str?mberg : >> Thank you everyone, >> >> Unfortunately I was wrong about the calculations I actually need to >> perform. I have got a working set of operations for what I need (see >> below), but it is awfully slow. Anybody got any idea on how to make it >> faster? >> >> # in = array >> # gaussian = array >> # threshold = int >> # percent = int >> >> diff = in - gaussian >> diff = numpy.where(diff > threshold,diff,0) > > It might help a little to write: > diff[diff>threshold] = 0 > > Otherwise there's not too much fat in there. As I think I've mentioned in another thread, these masked operations are inherently slow. We use iterators to do them, and this interferes with the ability of the compiler to use optimized instructions. It is exactly this operation that was the bottleneck in one of our programs, so we wrote a C function using SSE2 instructions to do this wicked fast. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri May 23 23:45:12 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 24 May 2008 12:45:12 +0900 Subject: [SciPy-user] how to build in fftw3 support? In-Reply-To: References: <3d375d730805221131k4e5d5d68x83b95bffcae14c54@mail.gmail.com> <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> <4836423F.1090907@ar.media.kyoto-u.ac.jp> <3d375d730805231153g6dc8b571l5997c1bab3fd9f0b@mail.gmail.com> Message-ID: <48378F48.6080706@ar.media.kyoto-u.ac.jp> Lin Shao wrote: > > Sure, I did notice that, but for 1-d FFT who cares that much about > which package to use. I would say that many people care, since that's the transforms that people contributing new backends did implement. > > I'm sorry, but I meant "outsiders to fftpack", not to scipy in general. Although I did not write the original code (Pearu Peterson and David M Cooke did), I did rewrote some backends, and I am in the middle of heavily refactoring it, so I would say I know it fairly well. Here is the story as I understand it: a few years ago, the initial scipy.fftpack was written, with fftpack as a backend, as the name suggested. Some functions could also use other backends (fftw, djbfft). As people contributed some functions, the code became unmaintainable. I wanted to speed-up the fftw3 backend, but this was too difficult with the code at that time (the code for all backends was mixed up), so I refactored it one first time. Now, I am refactoring it even more: the goal is to have totally separate implementation for each backend. That should make contribution easier, also. 
cheers, David From haase at msg.ucsf.edu Sat May 24 03:13:33 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sat, 24 May 2008 09:13:33 +0200 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: <3d375d730805232039y5725a06ah1bdd2cb58d03c1d6@mail.gmail.com> References: <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> <3d375d730805232039y5725a06ah1bdd2cb58d03c1d6@mail.gmail.com> Message-ID: On Sat, May 24, 2008 at 5:39 AM, Robert Kern wrote: > On Fri, May 23, 2008 at 9:06 PM, Anne Archibald > wrote: >> 2008/5/22 Johannes Str?mberg : >>> Thank you everyone, >>> >>> Unfortunately I was wrong about the calculations I actually need to >>> perform. I have got a working set of operations for what I need (see >>> below), but it is awfully slow. Anybody got any idea on how to make it >>> faster? >>> >>> # in = array >>> # gaussian = array >>> # threshold = int >>> # percent = int >>> >>> diff = in - gaussian >>> diff = numpy.where(diff > threshold,diff,0) >> >> It might help a little to write: >> diff[diff>threshold] = 0 >> >> Otherwise there's not too much fat in there. > > As I think I've mentioned in another thread, these masked operations > are inherently slow. We use iterators to do them, and this interferes > with the ability of the compiler to use optimized instructions. It is > exactly this operation that was the bottleneck in one of our programs, > so we wrote a C function using SSE2 instructions to do this wicked > fast. Robert, would it be possible, that you could post this "C function using SSE2 instructions" !? I'm very curious to see some SSE2. Are you saying the compiler is not optimizing normal code to use those instructions ? Lastly, I assume you interface to C using ctypes, right? Thanks, Sebastian Haase From david at ar.media.kyoto-u.ac.jp Sat May 24 03:12:24 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 24 May 2008 16:12:24 +0900 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <9457e7c80805211442lf01a230i8638f807235cf3b9@mail.gmail.com> <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> <3d375d730805232039y5725a06ah1bdd2cb58d03c1d6@mail.gmail.com> Message-ID: <4837BFD8.7040701@ar.media.kyoto-u.ac.jp> Sebastian Haase wrote: > Robert, > would it be possible, that you could post this > "C function using SSE2 instructions" !? > I'm very curious to see some SSE2. Are you saying the compiler is not > optimizing normal code to use those instructions ? > If iterators are used, I don't think it is even possible to use SSE: the pointer can be anywhere, so there is just no way for the compiler to use SIMD instructions. Only JIT could: that's the kind of things which could make python (or other dynamic languages) faster than C (in a really far future). cheers, David From robert.kern at gmail.com Sat May 24 05:24:29 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 May 2008 04:24:29 -0500 Subject: [SciPy-user] PIL and gaussian_filter? In-Reply-To: References: <3d375d730805211601we78ab47tf5396cfcafda2a93@mail.gmail.com> <3d375d730805232039y5725a06ah1bdd2cb58d03c1d6@mail.gmail.com> Message-ID: <3d375d730805240224w520d5731p8632d60973f5e250@mail.gmail.com> On Sat, May 24, 2008 at 2:13 AM, Sebastian Haase wrote: > On Sat, May 24, 2008 at 5:39 AM, Robert Kern wrote: >> As I think I've mentioned in another thread, these masked operations >> are inherently slow. 
We use iterators to do them, and this interferes >> with the ability of the compiler to use optimized instructions. It is >> exactly this operation that was the bottleneck in one of our programs, >> so we wrote a C function using SSE2 instructions to do this wicked >> fast. > Robert, > would it be possible, that you could post this > "C function using SSE2 instructions" !? Maybe. Certainly not soon. This was proprietary code. > I'm very curious to see some SSE2. Are you saying the compiler is not > optimizing normal code to use those instructions ? Yup. David explains why. > Lastly, I assume you interface to C using ctypes, right? No, I wrote the module by hand. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Sat May 24 09:42:44 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 24 May 2008 15:42:44 +0200 Subject: [SciPy-user] Scipy + SQLite crashes ? Message-ID: <48381B54.1050506@ru.nl> hello, why does the following program crashes ? from scipy import * import sqlite3 Traceback (most recent call last): File "D:\Data_Python\P24_PyLab_Works\module1.py", line 2, in import sqlite3 File "P:\Python\lib\sqlite3\__init__.py", line 24, in from dbapi2 import * File "P:\Python\lib\sqlite3\dbapi2.py", line 27, in from _sqlite3 import * ImportError: DLL load failed: Invalid access to memory location. And more important, is there a solution ? I'm using Python 2.5.2 and scipy 0.6.0 on win-XP-sp2 thanks, Stef Mientki From matthew.brett at gmail.com Sat May 24 10:04:22 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 24 May 2008 14:04:22 +0000 Subject: [SciPy-user] Scipy + SQLite crashes ? In-Reply-To: <48381B54.1050506@ru.nl> References: <48381B54.1050506@ru.nl> Message-ID: <1e2af89e0805240704veaade2bhc42075e794ad6d63@mail.gmail.com> Hi, > why does the following program crashes ? > from scipy import * > import sqlite3 What happens if you just do: import sqlite3 without the scipy import? If that works without error, what happens with: import scipy import sqlite3 ? Best, Matthew From stef.mientki at gmail.com Sat May 24 10:30:01 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Sat, 24 May 2008 16:30:01 +0200 Subject: [SciPy-user] Scipy + SQLite crashes ? In-Reply-To: <1e2af89e0805240704veaade2bhc42075e794ad6d63@mail.gmail.com> References: <48381B54.1050506@ru.nl> <1e2af89e0805240704veaade2bhc42075e794ad6d63@mail.gmail.com> Message-ID: <48382669.90801@gmail.com> thanks Matthew, you lead me to the solution ... Matthew Brett wrote: > Hi, > > >> why does the following program crashes ? >> from scipy import * >> import sqlite3 >> > > What happens if you just do: > > import sqlite3 > > without the scipy import? > works OK > If that works without error, what happens with: > > import scipy > import sqlite3 > works OK and this import sqlite3 from scipy import * also works OK, so that's the solution, I'll probably be able to implement that. cheers, Stef > ? > > Best, > > Matthew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From shao at msg.ucsf.edu Sat May 24 13:34:15 2008 From: shao at msg.ucsf.edu (Lin Shao) Date: Sat, 24 May 2008 10:34:15 -0700 Subject: [SciPy-user] how to build in fftw3 support? 
In-Reply-To: <48378F48.6080706@ar.media.kyoto-u.ac.jp> References: <3d375d730805221206s6df295caqd571009a9b8ef25@mail.gmail.com> <48363AB0.1030702@ar.media.kyoto-u.ac.jp> <3d375d730805222047k17ebdd0atf9ce4f618e81655d@mail.gmail.com> <4836423F.1090907@ar.media.kyoto-u.ac.jp> <3d375d730805231153g6dc8b571l5997c1bab3fd9f0b@mail.gmail.com> <48378F48.6080706@ar.media.kyoto-u.ac.jp> Message-ID: >> >> Sure, I did notice that, but for 1-d FFT who cares that much about >> which package to use. > > I would say that many people care, since that's the transforms that > people contributing new backends did implement. Sorry, I wasn't clear again. What I meant was any backend should perform 1-d FFT just as well as FFTW3; I think the biggest reason to use FFTW3 (at least to me) is for its multi-D performance. I just found in fftpack/NOTES.txt the following lines: To do ===== basic.py - Optimize ``fftn()`` for real input. - Implement ``rfftn()`` and ``irfftn()``. ...... So it looks like rfftn() was planned already. > Here is the story as I understand it: a few years ago, the initial > scipy.fftpack was written, with fftpack as a backend, as the name > suggested. Some functions could also use other backends (fftw, djbfft). > As people contributed some functions, the code became unmaintainable. I > wanted to speed-up the fftw3 backend, but this was too difficult with > the code at that time (the code for all backends was mixed up), so I > refactored it one first time. > > Now, I am refactoring it even more: the goal is to have totally separate > implementation for each backend. That should make contribution easier, also. > Ah, now it sounds very clear. Thanks for giving out the details. By "refactoring", did you mean "dividing it up"? If so, I fully agree. I'm very interested in contributing something to wrapping FFTW3. Please let me know if someone is already working on it, or how my work can potentially be plugged into your framework. (Should this discussion be moved to scipy-dev?) --lin From junky at nc.rr.com Sat May 24 22:19:04 2008 From: junky at nc.rr.com (Mihail Sichitiu) Date: Sat, 24 May 2008 22:19:04 -0400 Subject: [SciPy-user] Help with installing scypy Message-ID: I tried to follow the instructions here: http://www.scipy.org/Installing_SciPy/Mac_OS_X#head- cfc918ee2c334f8cf364c08fff51e210e7913062 for installing numpy and scipy on a Mac OS X (10.4.11), but no luck. According to the advice there here are my software versions: Legolas:~/temp/scipy root# gcc --version gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1819) Copyright (C) 2002 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Legolas:~/temp/scipy root# gfortran --version GNU Fortran (GCC) 4.2.1 Legolas:~/temp/scipy root# g77 --version GNU Fortran (GCC) 3.4.6 Legolas:~/temp/scipy root# Also, the message error when trying to make scipy (after building and installing numpy): Legolas:~/temp/scipy root# python setup.py build_src build_clib -- fcompiler=gnu build_ext --fcompiler=gnu build Traceback (most recent call last): File "setup.py", line 92, in ? setup_package() File "setup.py", line 63, in setup_package from numpy.distutils.core import setup File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ python2.3/site-packages/numpy/__init__.py", line 93, in ? 
import add_newdocs File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ python2.3/site-packages/numpy/add_newdocs.py", line 9, in ? from lib import add_newdoc File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ python2.3/site-packages/numpy/lib/__init__.py", line 18, in ? from io import * File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ python2.3/site-packages/numpy/lib/io.py", line 16, in ? from _compiled_base import packbits, unpackbits ImportError: cannot import name packbits Any ideas what's going wrong? Thanks, Mihai -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat May 24 22:32:32 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 May 2008 21:32:32 -0500 Subject: [SciPy-user] Help with installing scypy In-Reply-To: References: Message-ID: <3d375d730805241932i2c2cc9bau40e8a33c308c1985@mail.gmail.com> On Sat, May 24, 2008 at 9:19 PM, Mihail Sichitiu wrote: > I tried to follow the instructions here: > http://www.scipy.org/Installing_SciPy/Mac_OS_X#head-cfc918ee2c334f8cf364c08fff51e210e7913062 > for installing numpy and scipy on a Mac OS X (10.4.11), but no luck. > According to the advice there here are my software versions: > Legolas:~/temp/scipy root# gcc --version > gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1819) > Copyright (C) 2002 Free Software Foundation, Inc. > This is free software; see the source for copying conditions. There is NO > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. > Legolas:~/temp/scipy root# gfortran --version > GNU Fortran (GCC) 4.2.1 > Legolas:~/temp/scipy root# g77 --version > GNU Fortran (GCC) 3.4.6 > Legolas:~/temp/scipy root# > > Also, the message error when trying to make scipy (after building and > installing numpy): > Legolas:~/temp/scipy root# python setup.py build_src build_clib > --fcompiler=gnu build_ext --fcompiler=gnu build > Traceback (most recent call last): > File "setup.py", line 92, in ? > setup_package() > File "setup.py", line 63, in setup_package > from numpy.distutils.core import setup > File > "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/numpy/__init__.py", > line 93, in ? > import add_newdocs > File > "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/numpy/add_newdocs.py", > line 9, in ? > from lib import add_newdoc > File > "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/numpy/lib/__init__.py", > line 18, in ? > from io import * > File > "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/numpy/lib/io.py", > line 16, in ? > from _compiled_base import packbits, unpackbits > ImportError: cannot import name packbits > Any ideas what's going wrong? Your numpy install is busted somehow. What version of numpy did you try to install? Did you have a previous version sitting there? Also, the instructions recommend (and I strongly concur) that you install the official binaries from python.org instead of using the system-installed Python. That's probably not your problem, but it's worth doing. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From c-b at asu.edu Sun May 25 02:00:25 2008 From: c-b at asu.edu (Christopher Brown) Date: Sat, 24 May 2008 23:00:25 -0700 Subject: [SciPy-user] playing numpy arrays on a soundcard Message-ID: <48390079.4090408@asu.edu> Hi List, I have been working on a python module that passes numpy arrays to a soundcard for playback, in a similar way that wavplay works for matlab. I have a preliminary version finished, and I would be very happy to get feedback. I would describe my C skills as beginner/amateur, so there may be issues that I don't even know to look for. But it seems to work pretty well for me. The module is a wrapper around the audiere sound library (audiere.sf.net), which already comes with python bindings, and I simply added several functions, features, and docstrings to expand its usefulness. I have tested it on both windows xp and kubuntu, and I haven't perceived any problems. I have created a deb binary package that is available here (depends on libaudiere-1.9.4): http://pal.asu.edu/packages/pyaudiere-0.1-py2.5-i386.deb and the source code is available here: http://pal.asu.edu/packages/pyaudiere-0.1.tar.gz Thanks! -- Chris From contact at pythonxy.com Mon May 26 16:25:34 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Mon, 26 May 2008 22:25:34 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 Message-ID: <483B1CBE.8080607@pythonxy.com> Hi all, Python(x,y) 1.2.0 is now available on http://www.pythonxy.com. Changes history 05-26-2008 - Version 1.2.0 : * Added: o All Users / Current User installation options o Installation directories (Eclipse, MinGW, Python, ...) customization o Everything can be installed in one directory only * Updated: o PyQt 4.4.2 (PyQwt is no longer included) o Qt Eclipse Integration 1.4.0 o Qt Eclipse Help 4.4.0 o SymPy 0.5.15 * Corrected: o IPython bug with PyQt4 (Matplotlib and Qt4 interactive consoles): warning messages ("QCoreApplication::exec: The event loop is already running") were displayed when "help()" was entered for example As for 1.1.x releases, installer patches will be available for future upgrades in order to update Python(x,y) without having to download the whole package each time. Regards, Pierre Raybaut From nicolas.pettiaux at ael.be Tue May 27 03:22:27 2008 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Tue, 27 May 2008 09:22:27 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 In-Reply-To: <483B1CBE.8080607@pythonxy.com> References: <483B1CBE.8080607@pythonxy.com> Message-ID: 2008/5/26 Pierre Raybaut : > Python(x,y) 1.2.0 is now available on http://www.pythonxy.com. thanks for the news. I thank you for this initiative to put all these tools together in a very easy way to install, and I really appreciate that you choose to propose tools that allow to develop applications that would be independant from the operating system but ... I wonder what would be needed to package these very same tools for the other main operating systems, aka Mac OS X and GNU/linux ? I am especially interested in such questions as I do only use personnally these (Mac OS X Tiger and GNU/linux Ubuntu 8.04). I am quite sure that other people would appreciate the availability of pythonxy on these platforms. THanks, Nicolas -- Nicolas Pettiaux April - ? promouvoir et d?fendre le logiciel libre ? 
- www.april.org Rejoignez maintenant pr?s de 2 000 personnes, associations, entreprises et collectivit?s qui soutiennent notre action From contact at pythonxy.com Tue May 27 04:34:41 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 27 May 2008 10:34:41 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 In-Reply-To: References: <483B1CBE.8080607@pythonxy.com> Message-ID: <629b08a40805270134s51aadd64l5872371838d1f747@mail.gmail.com> 2008/5/27 Nicolas Pettiaux : > 2008/5/26 Pierre Raybaut : > > > Python(x,y) 1.2.0 is now available on http://www.pythonxy.com. > > thanks for the news. > > I thank you for this initiative to put all these tools together in a > very easy way to install, and I really appreciate that you choose to > propose tools that allow to develop applications that would be > independant from the operating system but ... I wonder what would be > needed to package these very same tools for the other main operating > systems, aka Mac OS X and GNU/linux ? I am especially interested in > such questions as I do only use personnally these (Mac OS X Tiger and > GNU/linux Ubuntu 8.04). I am quite sure that other people would > appreciate the availability of pythonxy on these platforms. You're not the only one to ask about it indeed. A GNU/Linux version is scheduled (and it seems to be far more simple to package than the Windows version), but I do not personnally use this OS. One of my colleague should be working on it shortly. We will then decide how to organize this work in simple tasks, and probably ask for contributions. As of today, no MacOs version is scheduled (because we don't use this OS, and we don't know anyone who could do it). Thanks for your comments. Regards, Pierre Raybaut -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.pettiaux at ael.be Tue May 27 05:16:31 2008 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Tue, 27 May 2008 11:16:31 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 In-Reply-To: <629b08a40805270134s51aadd64l5872371838d1f747@mail.gmail.com> References: <483B1CBE.8080607@pythonxy.com> <629b08a40805270134s51aadd64l5872371838d1f747@mail.gmail.com> Message-ID: 2008/5/27 Pierre Raybaut : > You're not the only one to ask about it indeed. > A GNU/Linux version is scheduled (and it seems to be far more simple to > package than the Windows version), but I do not personnally use this OS. One > of my colleague should be working on it shortly. We will then decide how to > organize this work in simple tasks, and probably ask for contributions. OK I could try to help though I have very little technical competencies that could help > As of today, no MacOs version is scheduled (because we don't use this OS, > and we don't know anyone who could do it). I do use Mac OS X but when serious work comes, I do fall back to GNU/linux where, for me, the installation of packages is much simplier. Hopefully more technically advanced people will be able to help and contribute there, as I know that there are a number of python developpers using Mac OS X. > Thanks for your comments. With pleasure, Regards, Nicolas -- Nicolas Pettiaux April - ? promouvoir et d?fendre le logiciel libre ? 
- www.april.org Rejoignez maintenant pr?s de 2 000 personnes, associations, entreprises et collectivit?s qui soutiennent notre action From lorenzo.isella at gmail.com Tue May 27 05:58:00 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Tue, 27 May 2008 11:58:00 +0200 Subject: [SciPy-user] Splitting up and Joining Sets Message-ID: Dear All, I do not know for sure whether a SciPy array is the right tool for what I have in mind, but I would like to post this here anyhow. Say that you have a set of elements (plain integers): {1,2,3,4,5,6,7,8,9,10}, which can end up split into several subsets [no element is repeated from a subset to another, I always have 10 elements as a whole]. Ordering does not matter, so {3,4,5} is the same as {5,3,4}. and {{1,2},{7,3}} is the same as {{7,3},{2,1}}. Now, this is what I would like to do: (1) Given e.g. the subsets {1,4,10},{3,7,6},{5,8,2} {9} I would like to be able to find out which sets were merged/split in a new configuration, e.g. {1,4,10,9} ,{3,7,6},{5,8,2} (joining of two sets) or {1,4,10},{3,7,6},{5,8} ,{9}, {2} (splitting of two sets). (2) There then may be a more subtle case, in which the total number of sets stays the same, but the structure of the sets changes e.g.: {1,4,10},{3,7,6},{5,8,2} {9} ---> {1,4,10},{3,7,6},{5,8} {9,2}, which I also would like to be able to detect, finding out which elements left which subset to end up in another one. (3) Even if a scipy array was not the most suitable structure to use for these manipulations, that is the way the data are originally manipulated in my code, so please let me know how I should convert them into another data type (if needed at all). Many thanks Lorenzo From lbolla at gmail.com Tue May 27 07:30:20 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 27 May 2008 13:30:20 +0200 Subject: [SciPy-user] Splitting up and Joining Sets In-Reply-To: References: Message-ID: <80c99e790805270430l50c61562u8defddf249e89a67@mail.gmail.com> maybe python sets are the right data structure to use: they ensure uniqueness and order does not matter. they can be built easily from numpy.array: In [1]: import numpy In [2]: x = set(numpy.arange(10)) In [3]: x Out[3]: set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) hth, L. On Tue, May 27, 2008 at 11:58 AM, Lorenzo Isella wrote: > Dear All, > I do not know for sure whether a SciPy array is the right tool for > what I have in mind, but I would like to post this here anyhow. > Say that you have a set of elements (plain integers): > {1,2,3,4,5,6,7,8,9,10}, which can end up split into several subsets > [no element is repeated from a subset to another, I always have 10 > elements as a whole]. Ordering does not matter, so {3,4,5} is the same > as {5,3,4}. and {{1,2},{7,3}} is the same as {{7,3},{2,1}}. > Now, this is what I would like to do: > (1) Given e.g. the subsets {1,4,10},{3,7,6},{5,8,2} {9} I would like > to be able to find out which sets were merged/split in a new > configuration, e.g. {1,4,10,9} ,{3,7,6},{5,8,2} (joining of two sets) > or {1,4,10},{3,7,6},{5,8} ,{9}, {2} (splitting of two sets). > (2) There then may be a more subtle case, in which the total number of > sets stays the same, but the structure of the sets changes e.g.: > {1,4,10},{3,7,6},{5,8,2} {9} ---> {1,4,10},{3,7,6},{5,8} {9,2}, which > I also would like to be able to detect, finding out which elements > left which subset to end up in another one. 
> (3) Even if a scipy array was not the most suitable structure to use > for these manipulations, that is the way the data are originally > manipulated in my code, so please let me know how I should convert > them into another data type (if needed at all). > > Many thanks > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bborcic at gmail.com Tue May 27 08:34:40 2008 From: bborcic at gmail.com (Boris Borcic) Date: Tue, 27 May 2008 14:34:40 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 In-Reply-To: <483B1CBE.8080607@pythonxy.com> References: <483B1CBE.8080607@pythonxy.com> Message-ID: Pierre Raybaut wrote: > Hi all, > > Python(x,y) 1.2.0 is now available on http://www.pythonxy.com. Hello, thank you for putting this together, this bundle is very interesting, but on the website I couldn't find a word on compatibility or conflict when one or the other tool is already pre-installed ?... Best, BB From garyrob at mac.com Tue May 27 15:17:44 2008 From: garyrob at mac.com (Gary Robinson) Date: Tue, 27 May 2008 15:17:44 -0400 Subject: [SciPy-user] problem with weave tutorial Message-ID: <20080527151744417795.f59d2ae9@mac.com> I installed scipy yesterday. from scipy import weave weave.test() runs fine. But when I try to run examples in the weave tutorial (http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/weave/doc/tutorial.txt), I get errors. For instance, the tutorial has the following: >>> a = 1 >>> a = weave.inline("return_val = Py::new_reference_to(Py::Int(a+1));",['a']) >>> a 2 But when I try to run it, I get the following error. Can anyone tell me why? This is on OS X 10.5.2. /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp: In function ?PyObject* compiled_func(PyObject*, PyObject*)?: /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Int? was not declared in this scope /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?new_reference_to? was not declared in this scope /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp: In function ?PyObject* compiled_func(PyObject*, PyObject*)?: /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Int? was not declared in this scope /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?new_reference_to? was not declared in this scope lipo: can't figure out the architecture type of: /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-//cc59NLL3.out /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp: In function ?PyObject* compiled_func(PyObject*, PyObject*)?: /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? 
has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Int? was not declared in this scope /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?new_reference_to? was not declared in this scope /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp: In function ?PyObject* compiled_func(PyObject*, PyObject*)?: /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Py? has not been declared /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?Int? was not declared in this scope /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: ?new_reference_to? was not declared in this scope lipo: can't figure out the architecture type of: /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-//cc59NLL3.out Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line 339, in inline **kw) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/distutils/core.py", line 176, in setup return old_setup(**new_attr) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) scipy.weave.build_tools.CompileError: error: Command "g++ -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/scxx -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp -o /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-/garyrob/python25_intermediate/compiler_0ce8a1fa01e8914c0a4825c7c67de6c6/Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.o" failed with exit status 1 Thanks, Gary -- Gary Robinson CTO Emergent Music, LLC personal email: garyrob at mac.com work email: grobinson at emergentmusic.com Company: http://www.emergentmusic.com Blog: http://www.garyrobinson.net From zunzun at zunzun.com Tue May 27 15:41:45 2008 From: zunzun at zunzun.com (James Phillips) Date: Tue, 27 May 2008 14:41:45 -0500 Subject: [SciPy-user] problem with weave tutorial In-Reply-To: <20080527151744417795.f59d2ae9@mac.com> References: <20080527151744417795.f59d2ae9@mac.com> Message-ID: 
<268756d30805271241u1bc769bdj5c6056a97accaa6b@mail.gmail.com> At the very bottom of the long, long list of errors and warnings is: scipy.weave.build_tools > > .CompileError: error: Command "g++ -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double > -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/scxx > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c > /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp > -o > /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-/garyrob/python25_intermediate/compiler_0ce8a1fa01e8914c0a4825c7c67de6c6/Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.o" > failed with exit status 1 You can try from a command prompt or user shell the actual command to the compiler from that error message: g++ -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u > > .sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd > -fno-common -dynamic -DNDEBUG -g -O3 > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/scxx > -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c > /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp > -o > /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-/garyrob/python25_intermediate/compiler_0ce8a1fa01e8914c0a4825c7c67de6c6/Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.o and see what the compiler tells you. James Phillips http://zunzun.com 2008/5/27 Gary Robinson : > I installed scipy yesterday... -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Tue May 27 15:54:51 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 27 May 2008 19:54:51 +0000 Subject: [SciPy-user] problem with weave tutorial In-Reply-To: <268756d30805271241u1bc769bdj5c6056a97accaa6b@mail.gmail.com> References: <20080527151744417795.f59d2ae9@mac.com> <268756d30805271241u1bc769bdj5c6056a97accaa6b@mail.gmail.com> Message-ID: <1e2af89e0805271254g1c70a7e5u9951ff2309b16dfc@mail.gmail.com> Hi, On Tue, May 27, 2008 at 7:41 PM, James Phillips wrote: > At the very bottom of the long, long list of errors and warnings is: > You can try from a command prompt or user shell the actual command to the > compiler from that error message: > > g++ -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u Well, but isn't that going to generate the error messages above that: > /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)': > /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: 'Py' has not been declared (etc) ? 
Best, Matthew From hoytak at gmail.com Tue May 27 16:01:31 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Tue, 27 May 2008 13:01:31 -0700 Subject: [SciPy-user] problem with weave tutorial In-Reply-To: <268756d30805271241u1bc769bdj5c6056a97accaa6b@mail.gmail.com> References: <20080527151744417795.f59d2ae9@mac.com> <268756d30805271241u1bc769bdj5c6056a97accaa6b@mail.gmail.com> Message-ID: <4db580fd0805271301h6e09f61h63c4dae6561f791f@mail.gmail.com> There's a couple things wrong here, and I think this is from an old version that is completely wrong now. First, the correct namespace is py, not Py. Second, I think a lot of the type conversion is handled automatically with py::object, i.e. the following works: a = weave.inline("return_val = py::object(a+1);",['a']) however, return_val is a py::object already, so a = weave.inline("return_val = a+1;", ['a']) also works. My general experience with weave is that you hardly ever need to explicitly work with py:: stuff unless you're doing stuff with lists, dicts, tuples, etc. It's almost never needed for ints and simple types. I don't know why this isn't in the tutorial; it seems that's pretty out of date. Perhaps a more efficient way to learn weave is simply to look at the examples in the scipy/weave/examples directory. There are quite a few there and they all work. :-) --Hoyt On Tue, May 27, 2008 at 12:41 PM, James Phillips wrote: > At the very bottom of the long, long list of errors and warnings is: > > scipy.weave.build_tools >> >> .CompileError: error: Command "g++ -arch ppc -arch i386 -isysroot >> /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double >> -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 >> -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave >> -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/scxx >> -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include >> -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c >> /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp >> -o >> /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-/garyrob/python25_intermediate/compiler_0ce8a1fa01e8914c0a4825c7c67de6c6/Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.o" >> failed with exit status 1 > > You can try from a command prompt or user shell the actual command to the > compiler from that error message: > > g++ -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u >> >> .sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd >> -fno-common -dynamic -DNDEBUG -g -O3 >> -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave >> -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/scxx >> -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include >> -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c >> /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp >> -o >> /var/folders/S5/S5m9tMBYH-KuPAoCKwCHQE+++TI/-Tmp-/garyrob/python25_intermediate/compiler_0ce8a1fa01e8914c0a4825c7c67de6c6/Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.o > > and see what the compiler tells you. > > James Phillips > http://zunzun.com > > > 2008/5/27 Gary Robinson : >> >> I installed scipy yesterday... 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From cohen at slac.stanford.edu Tue May 27 15:54:00 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 27 May 2008 21:54:00 +0200 Subject: [SciPy-user] problem with weave tutorial In-Reply-To: <1e2af89e0805271254g1c70a7e5u9951ff2309b16dfc@mail.gmail.com> References: <20080527151744417795.f59d2ae9@mac.com> <268756d30805271241u1bc769bdj5c6056a97accaa6b@mail.gmail.com> <1e2af89e0805271254g1c70a7e5u9951ff2309b16dfc@mail.gmail.com> Message-ID: <483C66D8.7070506@slac.stanford.edu> I get the same error on my linux box and a recent svn version of scipy.... Johann Matthew Brett wrote: > Hi, > > On Tue, May 27, 2008 at 7:41 PM, James Phillips wrote: > >> At the very bottom of the long, long list of errors and warnings is: >> > > >> You can try from a command prompt or user shell the actual command to the >> compiler from that error message: >> >> g++ -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u >> > > Well, but isn't that going to generate the error messages above that: > > >> /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp: In function 'PyObject* compiled_func(PyObject*, PyObject*)': >> /Users/garyrob/.python25_compiled/sc_119f07fd9a656915569734d041f6ace92.cpp:663: error: 'Py' has not been declared >> > > (etc) > > ? > > Best, > > Matthew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From contact at pythonxy.com Tue May 27 16:22:57 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 27 May 2008 22:22:57 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 Message-ID: <483C6DA1.20900@pythonxy.com> > > Pierre Raybaut wrote: >> > Hi all, >> > >> > Python(x,y) 1.2.0 is now available on http://www.pythonxy.com. >> > > Hello, thank you for putting this together, this bundle is very interesting, but > on the website I couldn't find a word on compatibility or conflict when one or > the other tool is already pre-installed ?... > > Best, BB It depends on what is pre-installed (could you be more specific?). I think that re-installing over an existing Python 2.5.2 and Python modules (as well as Eclipse and MinGW) won't be a problem as long as it is the same version and provided that you enter the same installation folder during the installation process. Otherwise, I didn't do any test to check conflicting issues with other Python versions, but I would not recommend it. This being said, that is a general topic, as Python(x,y) is "only" a distribution, so perhaps some people here would have a better knowledge on this matter than me. Thanks for your message. Pierre Raybaut From c-b at asu.edu Tue May 27 18:58:17 2008 From: c-b at asu.edu (Christopher Brown) Date: Tue, 27 May 2008 15:58:17 -0700 Subject: [SciPy-user] playing numpy arrays on a soundcard In-Reply-To: <48390079.4090408@asu.edu> References: <48390079.4090408@asu.edu> Message-ID: <483C9209.2030007@asu.edu> Hi List, There was a problem building pyaudiere on windows, caused by a docstring that was too long for the ms compiler. 
I added the docstrings from home over the weekend on my linux machine, and didn't get to test it until today. Sorry about that. This has been corrected. There is now a windows installer and a zip file of the source, available here:

http://pal.asu.edu/packages/pyaudiere-0.1.win32-py2.5.exe
http://pal.asu.edu/packages/pyaudiere-0.1.zip

And for a straight up analog to matlab's wavplay, try this:

    def wavplay(buff, fs, pan=0):
        import audiere
        from time import sleep
        d = audiere.open_device()
        s = d.open_array(buff,fs)
        s.pan = pan
        s.play()
        while s.playing:
            sleep(.01)

--
Chris

From garyrob at mac.com Tue May 27 19:04:18 2008
From: garyrob at mac.com (Gary Robinson)
Date: Tue, 27 May 2008 19:04:18 -0400
Subject: [SciPy-user] problem with weave tutorial
Message-ID: <20080527190418737542.fde72cfa@mac.com>

Hoyt:

Many thanks. So, the idea is simply that the tutorial is ridiculously out-of-date, I guess. Hopefully someone will find time to fix that at some point!

Since apparently there is no written resource that reliably explains things like the following question for the current weave, I'll ask it here:

The examples you give:

    a = weave.inline("return_val = py::object(a+1);",['a'])

    a = weave.inline("return_val = a+1;", ['a'])

don't explicitly do anything with reference counting. Can you give any hints about if/when it is necessary to explicitly deal with reference counting when using Weave? I'd rather not have to extrapolate that info purely from the examples -- but I will if necessary, of course.

Many thanks,
Gary

--
Gary Robinson
CTO
Emergent Music, LLC
personal email: garyrob at mac.com
work email: grobinson at emergentmusic.com
Company: http://www.emergentmusic.com
Blog: http://www.garyrobinson.net

From hoytak at gmail.com Tue May 27 19:17:31 2008
From: hoytak at gmail.com (Hoyt Koepke)
Date: Tue, 27 May 2008 16:17:31 -0700
Subject: [SciPy-user] problem with weave tutorial
In-Reply-To: <20080527190418737542.fde72cfa@mac.com>
References: <20080527190418737542.fde72cfa@mac.com>
Message-ID: <4db580fd0805271617v72a90951r5c50ab84aa6abafe@mail.gmail.com>

> Since apparently there is no written resource that reliably explains things like
> the following question for the current weave, I'll ask it here:

I should also point out that Travis' numpy book does have a few decent chapters on weave, but IIRC it doesn't answer your questions.

> The examples you give:
>
> a = weave.inline("return_val = py::object(a+1);",['a'])
>
> a = weave.inline("return_val = a+1;", ['a'])
>
> don't explicitly do anything with reference counting. Can you give any hints about if/when it
> is necessary to explicitly deal with reference counting when using Weave? I'd rather not have
> to extrapolate that info purely from the examples -- but I will if necessary, of course.

AFAIK, never. The core of weave is the scxx library, and you can look in scipy/weave/scxx/object.h to see how it works. It has a lot of well-commented code dealing with reference counting in there, so I believe everything is handled automatically. If you have specific questions, that might be a good place to look. In my experience, the cpp code generated by weave is actually fairly easy to examine and figure out what's going on.
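To make that concrete, here is a minimal, untested sketch of the sort of inline code this allows -- no manual Py_INCREF/Py_DECREF anywhere, because the scxx wrappers (py::object, py::list, ...) hold the references for you. The function name is purely illustrative, and the exact py::list API used here is my recollection of the scxx headers rather than anything stated in the tutorial:

    from scipy import weave

    def squares_list(n):
        # Hypothetical example: build and return a Python list from inline
        # C++. scxx's py::list does all the reference counting internally.
        code = """
        py::list result;
        for (int i = 0; i < n; i++) {
            result.append(py::object(i * i));
        }
        return_val = result;
        """
        return weave.inline(code, ['n'])

    print squares_list(5)   # should print [0, 1, 4, 9, 16]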
--Hoyt +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From hoytak at gmail.com Tue May 27 19:18:38 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Tue, 27 May 2008 16:18:38 -0700 Subject: [SciPy-user] problem with weave tutorial In-Reply-To: <4db580fd0805271617v72a90951r5c50ab84aa6abafe@mail.gmail.com> References: <20080527190418737542.fde72cfa@mac.com> <4db580fd0805271617v72a90951r5c50ab84aa6abafe@mail.gmail.com> Message-ID: <4db580fd0805271618o486b4b28g9e4b69953b06c133@mail.gmail.com> Correction: > I should also point out that Travis' numpy book does have a few decent > chapters on weave, but IIRC it doesn't answer your questions. should read "a few decent pages on weave"... My bad. Time for some coffee.... --Hoyt > > +++++++++++++++++++++++++++++++++++ > Hoyt Koepke > UBC Department of Computer Science > http://www.cs.ubc.ca/~hoytak/ > hoytak at gmail.com > +++++++++++++++++++++++++++++++++++ > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From zdzis1 at 31.pl Wed May 28 13:05:10 2008 From: zdzis1 at 31.pl (zdzis1) Date: Wed, 28 May 2008 19:05:10 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.0 References: <483B1CBE.8080607@pythonxy.com> Message-ID: This is great stuff and very useful, especially now that Enthon has become practically commercial. Thanks A LOT z Pierre Raybaut wrote: > Hi all, > > Python(x,y) 1.2.0 is now available on http://www.pythonxy.com. > > Changes history > 05-26-2008 - Version 1.2.0 : > > * Added: > o All Users / Current User installation options > o Installation directories (Eclipse, MinGW, Python, ...) > customization > o Everything can be installed in one directory only > * Updated: > o PyQt 4.4.2 (PyQwt is no longer included) > o Qt Eclipse Integration 1.4.0 > o Qt Eclipse Help 4.4.0 > o SymPy 0.5.15 > * Corrected: > o IPython bug with PyQt4 (Matplotlib and Qt4 interactive > consoles): warning messages ("QCoreApplication::exec: The event loop is > already running") were displayed when "help()" was entered for example > > As for 1.1.x releases, installer patches will be available for future > upgrades in order to update Python(x,y) without having to download the > whole package each time. > > Regards, > Pierre Raybaut From contact at pythonxy.com Wed May 28 16:03:34 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Wed, 28 May 2008 22:03:34 +0200 Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix Message-ID: <483DBA96.2090201@pythonxy.com> Hi all, Python(x,y) 1.2.1 is now available on http://www.pythonxy.com. It is highly recommended to update if you have installed the previous release: a critical bug in Eclipse installation is now fixed. Besides, it is quite a small update patch (<1Mb) so it's definitely worth it. Changes history 05 -28 -2008 - Version 1.2.1 : * Corrected: o [Critical bug!] Eclipse: Python interpreter paths were not changed according to the installation directory o PyQt 4.4.2: minor installation bugs (file type associations, ...) 
* Added (minor changes): o Qt Assistant shortcut (start menu and "Welcome to Python(x,y)") o Windows explorer integration: IPython System Shell console (with Qt support) instead of two consoles (IPython (Qt) and Windows cmd.exe) Regards, Pierre Raybaut From spmcinerney at hotmail.com Wed May 28 16:17:31 2008 From: spmcinerney at hotmail.com (Stephen McInerney) Date: Wed, 28 May 2008 13:17:31 -0700 Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix In-Reply-To: <483DBA96.2090201@pythonxy.com> References: <483DBA96.2090201@pythonxy.com> Message-ID: Thanks Pierre I was going to mail you that bug anyway. Also, I think you are still defaulting some paths (e.g. MingW) to be under C:\Program Files\ which as I mentioned is a huge big problem with Windows Vista's UAC, especially when you install as Administrator then try to run it as a user - it triggers loads of UAC nag warnings anytime you do file operations. May I suggest it is best to prompt the user for the base directory of the entire install (e.g. "C:\DEV") then after that, automatically default all the paths to e.g. C:\DEV\ Thanks for the lightning turnaround on patches, by the way. I am happily running with 1.2.0 Regards, Stephen > Date: Wed, 28 May 2008 22:03:34 +0200> From: contact at pythonxy.com> To: pythonxy at googlegroups.com; scipy-user at scipy.org> Subject: [ Python(x,y) ] New release : 1.2.1 - Critical bug fix> > > Hi all,> > Python(x,y) 1.2.1 is now available on http://www.pythonxy.com.> > It is highly recommended to update if you have installed the previous > release: a critical bug in Eclipse installation is now fixed.> Besides, it is quite a small update patch (<1Mb) so it's definitely > worth it.> > Changes history> 05 -28 -2008 - Version 1.2.1 :> > * Corrected:> o [Critical bug!] Eclipse: Python interpreter paths were not > changed according to the installation directory> o PyQt 4.4.2: minor installation bugs (file type > associations, ...)> * Added (minor changes):> o Qt Assistant shortcut (start menu and "Welcome to Python(x,y)")> o Windows explorer integration: IPython System Shell console > (with Qt support) instead of two consoles (IPython (Qt) and Windows cmd.exe)> > Regards,> Pierre Raybaut> > > --~--~---------~--~----~------------~-------~--~----~> You received this message because you are subscribed to the Google Groups "python(x,y)" group.> To post to this group, send email to pythonxy at googlegroups.com> To unsubscribe from this group, send email to pythonxy-unsubscribe at googlegroups.com> For more options, visit this group at http://groups.google.com/group/pythonxy?hl=en> -~----------~----~----~----~------~----~------~--~---> _________________________________________________________________ Give to a good cause with every e-mail. Join the i?m Initiative from Microsoft. http://im.live.com/Messenger/IM/Join/Default.aspx?souce=EML_WL_ GoodCause -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Wed May 28 20:51:25 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 28 May 2008 20:51:25 -0400 Subject: [SciPy-user] Splitting up and Joining Sets In-Reply-To: References: Message-ID: 2008/5/27 Lorenzo Isella : > I do not know for sure whether a SciPy array is the right tool for > what I have in mind, but I would like to post this here anyhow. 
> Say that you have a set of elements (plain integers):
> {1,2,3,4,5,6,7,8,9,10}, which can end up split into several subsets
> [no element is repeated from a subset to another, I always have 10
> elements as a whole]. Ordering does not matter, so {3,4,5} is the same
> as {5,3,4}. and {{1,2},{7,3}} is the same as {{7,3},{2,1}}.
> Now, this is what I would like to do:
> (1) Given e.g. the subsets {1,4,10},{3,7,6},{5,8,2} {9} I would like
> to be able to find out which sets were merged/split in a new
> configuration, e.g. {1,4,10,9} ,{3,7,6},{5,8,2} (joining of two sets)
> or {1,4,10},{3,7,6},{5,8} ,{9}, {2} (splitting of two sets).
> (2) There then may be a more subtle case, in which the total number of
> sets stays the same, but the structure of the sets changes e.g.:
> {1,4,10},{3,7,6},{5,8,2} {9} ---> {1,4,10},{3,7,6},{5,8} {9,2}, which
> I also would like to be able to detect, finding out which elements
> left which subset to end up in another one.
> (3) Even if a scipy array was not the most suitable structure to use
> for these manipulations, that is the way the data are originally
> manipulated in my code, so please let me know how I should convert
> them into another data type (if needed at all).

If you want efficient algorithms to work with these objects, they are known as "partitions of a set" or "equivalence relations", and there are various references. One of the volumes of Knuth's The Art of Computer Programming has a section on equivalence relations, if I recall correctly.

If you're going to go about it this way, you might consider representing each partition as an array A of integers, in which A[i] is the smallest integer in the set containing i. Then:

(1) for a crude O(N**2) algorithm:

    import numpy as np

    def refinement(A, B):  # untested!
        for s in np.unique(A):  # for each set
            if len(np.unique(B[A==s])) > 1:  # if it came from more than one
                return False
        return True

Faster, more arcane algorithms are certainly possible:

    def refinement(A, B):  # untested!
        idx = np.argsort(A)  # idx lets you easily get a list of elements in a given subset
        return np.all(np.diff(B[idx])[np.diff(A[idx])==0]==0)

(2) can probably be accomplished similarly depending on exactly what you want. Keep in mind that there may be no obvious way to associate sets in the "before" and "after" partitions. You could describe how to get from one to the other by refining the first one, then joining some sets of the refined partition to get the second in a minimal way; this seems to be called taking the "meet" of the two partitions. Here's a crude algorithm:

    def meet(A, B):  # untested
        r = np.arange(len(A))
        C = B.copy()
        for s in np.unique(A):
            ss = A==s
            for t in np.unique(B):
                ts = B==t
                if np.any(ss & ts):  # skip empty intersections
                    C[ss & ts] = r[ss & ts][0]
        return C

Given C the meet of A and B, a reasonable list of elements that have "moved" is (C!=A) | (C!=B). Not ideal, sometimes it will say too many elements moved.

Good luck,
Anne

From junky at nc.rr.com Wed May 28 22:52:49 2008
From: junky at nc.rr.com (Mihail Sichitiu)
Date: Wed, 28 May 2008 22:52:49 -0400
Subject: [SciPy-user] SciPy-user Digest, Vol 57, Issue 44
In-Reply-To:
References:
Message-ID:

> Thanks Robert, you're right: numpy was busted. Unfortunately it's
> still busted. I installed the ActivePython 2.5 and made sure that
> it's the python that it's getting used, then tried to compile numpy
again but I get this error message (seem to be with the gcc compilation options). Unfortunately I don't know enough to fix it. I'm attaching below the output of the "python setup.py build". Really appreciate your help.
python setup.py build Running from numpy source directory. non-existing path in 'numpy/distutils': 'site.cfg' F2PY Version 2_5234 blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] running build running scons customize UnixCCompiler Found executable /usr/bin/gcc customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Found executable /usr/local/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize UnixCCompiler customize UnixCCompiler using scons running config_cc unifing config_cc, config, build_clib, build_ext, build commands -- compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands -- fcompiler options running build_src building py_modules sources creating build/src.macosx-10.3-ppc-2.5 creating build/src.macosx-10.3-ppc-2.5/numpy creating build/src.macosx-10.3-ppc-2.5/numpy/distutils building extension "numpy.core.multiarray" sources creating build/src.macosx-10.3-ppc-2.5/numpy/core Generating build/src.macosx-10.3-ppc-2.5/numpy/core/config.h customize NAGFCompiler customize AbsoftFCompiler customize IBMFCompiler customize IntelFCompiler customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fPIC -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes compile options: '-I/Library/Frameworks/Python.framework/Versions/2.5/ include/python2.5 -Inumpy/core/src -Inumpy/core/include -I/Library/ Frameworks/Python.framework/Versions/2.5/include/python2.5 -I/Library/ Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' gcc: _configtest.c gcc: cannot specify -o with -c or -S and multiple compilations gcc: cannot specify -o with -c or -S and multiple compilations failure. 
removing: _configtest.c _configtest.o Traceback (most recent call last): File "setup.py", line 96, in setup_package() File "setup.py", line 89, in setup_package configuration=configuration ) File "/Users/mlsichit/temp/numpy/numpy/distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/Users/mlsichit/temp/numpy/numpy/distutils/command/ build.py", line 40, in run old_build.run(self) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/Users/mlsichit/temp/numpy/numpy/distutils/command/ build_src.py", line 130, in run self.build_sources() File "/Users/mlsichit/temp/numpy/numpy/distutils/command/ build_src.py", line 147, in build_sources self.build_extension_sources(ext) File "/Users/mlsichit/temp/numpy/numpy/distutils/command/ build_src.py", line 250, in build_extension_sources sources = self.generate_sources(sources, ext) File "/Users/mlsichit/temp/numpy/numpy/distutils/command/ build_src.py", line 307, in generate_sources source = func(extension, build_dir) File "numpy/core/setup.py", line 83, in generate_config_h raise SystemError,"Failed to test configuration. "\ SystemError: Failed to test configuration. See previous error messages for more information. -------------------------------- Thanks for the help, Mihai > ------------------------------ > > Message: 3 > Date: Sat, 24 May 2008 21:32:32 -0500 > From: "Robert Kern" > Subject: Re: [SciPy-user] Help with installing scypy > To: "SciPy Users List" > Message-ID: > <3d375d730805241932i2c2cc9bau40e8a33c308c1985 at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Sat, May 24, 2008 at 9:19 PM, Mihail Sichitiu > wrote: >> I tried to follow the instructions here: >> http://www.scipy.org/Installing_SciPy/Mac_OS_X#head- >> cfc918ee2c334f8cf364c08fff51e210e7913062 >> for installing numpy and scipy on a Mac OS X (10.4.11), but no luck. >> According to the advice there here are my software versions: >> Legolas:~/temp/scipy root# gcc --version >> gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1819) >> Copyright (C) 2002 Free Software Foundation, Inc. >> This is free software; see the source for copying conditions. >> There is NO >> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR >> PURPOSE. >> Legolas:~/temp/scipy root# gfortran --version >> GNU Fortran (GCC) 4.2.1 >> Legolas:~/temp/scipy root# g77 --version >> GNU Fortran (GCC) 3.4.6 >> Legolas:~/temp/scipy root# >> >> Also, the message error when trying to make scipy (after building and >> installing numpy): >> Legolas:~/temp/scipy root# python setup.py build_src build_clib >> --fcompiler=gnu build_ext --fcompiler=gnu build >> Traceback (most recent call last): >> File "setup.py", line 92, in ? 
>> setup_package() >> File "setup.py", line 63, in setup_package >> from numpy.distutils.core import setup >> File >> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ >> python2.3/site-packages/numpy/__init__.py", >> line 93, in ? >> import add_newdocs >> File >> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ >> python2.3/site-packages/numpy/add_newdocs.py", >> line 9, in ? >> from lib import add_newdoc >> File >> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ >> python2.3/site-packages/numpy/lib/__init__.py", >> line 18, in ? >> from io import * >> File >> "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/ >> python2.3/site-packages/numpy/lib/io.py", >> line 16, in ? >> from _compiled_base import packbits, unpackbits >> ImportError: cannot import name packbits >> Any ideas what's going wrong? > > Your numpy install is busted somehow. What version of numpy did you > try to install? Did you have a previous version sitting there? > > Also, the instructions recommend (and I strongly concur) that you > install the official binaries from python.org instead of using the > system-installed Python. That's probably not your problem, but it's > worth doing. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > From didier.rano at gmail.com Wed May 28 23:04:04 2008 From: didier.rano at gmail.com (didier rano) Date: Wed, 28 May 2008 23:04:04 -0400 Subject: [SciPy-user] SciPy-user Digest, Vol 57, Issue 44 In-Reply-To: References: Message-ID: Hi, I try to compile scipy (subversion) and timeseries module on a web server shared hosting (I am alone to use scipy on a web server?). I have some problems to use, because the compilation seems to be ok. lib/python2.4/site-packages/scipy/linalg/flapack.so: undefined symbol: atl_f77wrap_zherk__ I cannot use gfortran, but only g77. I have compiled numpy (subversion), lapack, atlas, fftw but I cannot compile UMFPACK. (without gfortran, I suppose) Thank you for your help Didier Rano -------------- next part -------------- An HTML attachment was scrubbed... URL: From junky at nc.rr.com Wed May 28 23:44:17 2008 From: junky at nc.rr.com (Mihail Sichitiu) Date: Wed, 28 May 2008 23:44:17 -0400 Subject: [SciPy-user] solved numpy, still problems with scipy In-Reply-To: References: Message-ID: <8A86E812-818A-4834-A5C3-0BA90DD66A77@nc.rr.com> Abort, abort. I figured (by pure luck) the problem with numpy: I reverted to gcc 4.0 (with gcc_select 4.0) and then it compiled fine and it installed fine. I ran the test in python and it worked great. But I still have problems with scipy. It seems to be a problem with the final linking (not finding some libraries?). What is lcc_dynamic? Alternatively, I'd be happy if I'd find a binary installation for PowerPC, Tiger, but I wasn't able to - I'm ultimately trying to install Gnu Radio that depends on scipy. Thanks a lot! 
python setup.py build_src build_clib --fcompiler=gnu build_ext -- fcompiler=gnu build mkl_info: libraries mkl,vml,guide not found in /Library/Frameworks/ Python.framework/Versions/2.5/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib libraries mkl,vml,guide not found in /opt/local/lib libraries mkl,vml,guide not found in /sw/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /Library/Frameworks/Python.framework/ Versions/2.5/lib FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] djbfft_info: NOT AVAILABLE blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] non-existing path in 'scipy/linsolve': 'tests' umfpack_info: libraries umfpack not found in /Library/Frameworks/ Python.framework/Versions/2.5/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib libraries umfpack not found in /opt/local/lib libraries umfpack not found in /sw/lib /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site- packages/numpy/distutils/system_info.py:414: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/ umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. adding 'build/src.macosx-10.3-ppc-2.5/scipy/interpolate/dfitpack- f2pywrappers.f' to sources. 
building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. adding 'build/src.macosx-10.3-ppc-2.5/build/src.macosx-10.3- ppc-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.3-ppc-2.5/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.3-ppc-2.5/scipy/lib/lapack/ clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.macosx-10.3-ppc-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. adding 'build/src.macosx-10.3-ppc-2.5/build/src.macosx-10.3- ppc-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.3-ppc-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.3-ppc-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. adding 'build/src.macosx-10.3-ppc-2.5/build/src.macosx-10.3- ppc-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.3-ppc-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.linalg._iterative" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. 
building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse._sparsetools" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.3-ppc-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-ppc-2.5' to include_dirs. adding 'build/src.macosx-10.3-ppc-2.5/scipy/stats/mvn- f2pywrappers.f' to sources. building extension "scipy.ndimage._nd_image" sources building data_files sources running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize GnuFCompiler Found executable /usr/local/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext library 'mach' defined more than once, overwriting build_info {'sources': ['scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/ i1mach.f', 'scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/ xerror.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}... with {'sources': ['scipy/special/mach/d1mach.f', 'scipy/special/mach/ i1mach.f', 'scipy/special/mach/r1mach.f', 'scipy/special/mach/ xerror.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}... 
extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using build_ext building 'scipy.fftpack.convolve' extension compiling C sources C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fPIC -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 - Wall -Wstrict-prototypes compile options: '-DSCIPY_FFTW3_H -I/usr/local/include -Ibuild/ src.macosx-10.3-ppc-2.5 -I/Library/Frameworks/Python.framework/ Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/ Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' gcc: scipy/fftpack/src/convolve.c gcc: build/src.macosx-10.3-ppc-2.5/scipy/fftpack/convolvemodule.c /usr/local/bin/g77 -g -Wall -g -Wall -undefined dynamic_lookup - bundle build/temp.macosx-10.3-ppc-2.5/build/src.macosx-10.3-ppc-2.5/ scipy/fftpack/convolvemodule.o build/temp.macosx-10.3-ppc-2.5/scipy/ fftpack/src/convolve.o build/temp.macosx-10.3-ppc-2.5/build/ src.macosx-10.3-ppc-2.5/fortranobject.o -L/usr/local/lib -L/usr/local/ lib/gcc/powerpc-apple-darwin8.8.0/3.4.6 -Lbuild/temp.macosx-10.3- ppc-2.5 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/ lib.macosx-10.3-ppc-2.5/scipy/fftpack/convolve.so /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -g -Wall -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-ppc-2.5/build/ src.macosx-10.3-ppc-2.5/scipy/fftpack/convolvemodule.o build/ temp.macosx-10.3-ppc-2.5/scipy/fftpack/src/convolve.o build/ temp.macosx-10.3-ppc-2.5/build/src.macosx-10.3-ppc-2.5/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/powerpc-apple- darwin8.8.0/3.4.6 -Lbuild/temp.macosx-10.3-ppc-2.5 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.macosx-10.3-ppc-2.5/scipy/fftpack/ convolve.so" failed with exit status 1 Legolas:~/temp/scipy-0.6.0 root# Your help is highly appreciated, M. From robert.kern at gmail.com Thu May 29 00:18:27 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 May 2008 23:18:27 -0500 Subject: [SciPy-user] solved numpy, still problems with scipy In-Reply-To: <8A86E812-818A-4834-A5C3-0BA90DD66A77@nc.rr.com> References: <8A86E812-818A-4834-A5C3-0BA90DD66A77@nc.rr.com> Message-ID: <3d375d730805282118u1d316138o1ebb373e0b811190@mail.gmail.com> On Wed, May 28, 2008 at 10:44 PM, Mihail Sichitiu wrote: > > Abort, abort. I figured (by pure luck) the problem with numpy: I > reverted to gcc 4.0 (with gcc_select 4.0) and then it compiled fine > and it installed fine. I ran the test in python and it worked great. > But I still have problems with scipy. It seems to be a problem with > the final linking (not finding some libraries?). What is lcc_dynamic? It's used for g77/gcc 3.x. You must use gfortran if you are using gcc 4.0. You cannot use g77 with gcc 4.x. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From contact at pythonxy.com Thu May 29 05:07:39 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Thu, 29 May 2008 11:07:39 +0200 Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix In-Reply-To: References: <483DBA96.2090201@pythonxy.com> Message-ID: <629b08a40805290207y156bc331q7279a641a32fc277@mail.gmail.com> Hi Stephen, 2008/5/28 Stephen McInerney : > Also, I think you are still defaulting some paths (e.g. MingW) > to be under C:\Program Files\ which as I mentioned is a huge big > problem with Windows Vista's UAC, especially when you install as > Administrator then try to run it as a user - it triggers loads of > UAC nag warnings anytime you do file operations. > > May I suggest it is best to prompt the user for the base directory > of the entire install (e.g. "C:\DEV") then after that, > automatically default > all the paths to e.g. C:\DEV\ > I think that is a good compromise indeed, and I will update the installer shortly. BTW, I will probably include the new NumPy release (1.1.0)... unless I wait for the forthcoming IPython 0.8.3 and matplotlib 0.91.3 which should be available in a few days. > > Thanks for the lightning turnaround on patches, by the way. > I am happily running with 1.2.0 > You're welcome! Regards, Pierre > > > Regards, > Stephen > > > > Date: Wed, 28 May 2008 22:03:34 +0200 > > From: contact at pythonxy.com > > To: pythonxy at googlegroups.com; scipy-user at scipy.org > > Subject: [ Python(x,y) ] New release : 1.2.1 - Critical bug fix > > > > > > > Hi all, > > > > Python(x,y) 1.2.1 is now available on http://www.pythonxy.com. > > > > It is highly recommended to update if you have installed the previous > > release: a critical bug in Eclipse installation is now fixed. > > Besides, it is quite a small update patch (<1Mb) so it's definitely > > worth it. > > > > Changes history > > 05 -28 -2008 - Version 1.2.1 : > > > > * Corrected: > > o [Critical bug!] Eclipse: Python interpreter paths were not > > changed according to the installation directory > > o PyQt 4.4.2: minor installation bugs (file type > > associations, ...) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spmcinerney at hotmail.com Thu May 29 05:12:36 2008 From: spmcinerney at hotmail.com (Stephen McInerney) Date: Thu, 29 May 2008 02:12:36 -0700 Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix In-Reply-To: <629b08a40805290207y156bc331q7279a641a32fc277@mail.gmail.com> References: <483DBA96.2090201@pythonxy.com> <629b08a40805290207y156bc331q7279a641a32fc277@mail.gmail.com> Message-ID: Sorry to be unclear, I really meant to say: - first ask the user what the BaseDir for the install is, - then default each individual package's install-dir to /, but also let them override that on a per-package basis. Regards, Stephen Date: Thu, 29 May 2008 11:07:39 +0200From: contact at pythonxy.comTo: pythonxy at googlegroups.comSubject: Re: [ Python(x,y) ] New release : 1.2.1 - Critical bug fixCC: scipy-user at scipy.org Hi Stephen, 2008/5/28 Stephen McInerney : Also, I think you are still defaulting some paths (e.g. 
MingW)to be under C:\Program Files\ which as I mentioned is a huge bigproblem with Windows Vista's UAC, especially when you install asAdministrator then try to run it as a user - it triggers loads ofUAC nag warnings anytime you do file operations. May I suggest it is best to prompt the user for the base directoryof the entire install (e.g. "C:\DEV") then after that, automatically default all the paths to e.g. C:\DEV\ I think that is a good compromise indeed, and I will update the installer shortly. BTW, I will probably include the new NumPy release (1.1.0)... unless I wait for the forthcoming IPython 0.8.3 and matplotlib 0.91.3 which should be available in a few days. Thanks for the lightning turnaround on patches, by the way.I am happily running with 1.2.0 You're welcome! Regards,Pierre _________________________________________________________________ E-mail for the greater good. Join the i?m Initiative from Microsoft. http://im.live.com/Messenger/IM/Join/Default.aspx?source=EML_WL_ GreaterGood -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu May 29 08:11:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 29 May 2008 21:11:36 +0900 Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix In-Reply-To: <483DBA96.2090201@pythonxy.com> References: <483DBA96.2090201@pythonxy.com> Message-ID: <483E9D78.6090007@ar.media.kyoto-u.ac.jp> Pierre Raybaut wrote: > Hi all, > > Python(x,y) 1.2.1 is now available on http://www.pythonxy.com. > > It is highly recommended to update if you have installed the previous > release: a critical bug in Eclipse installation is now fixed. > Besides, it is quite a small update patch (<1Mb) so it's definitely > worth it. > Congratulation for your work. I tried to download the sources, and could not find a link on the website ? thanks, David From contact at pythonxy.com Thu May 29 17:08:28 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Thu, 29 May 2008 23:08:28 +0200 Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix In-Reply-To: References: Message-ID: <483F1B4C.5060501@pythonxy.com> scipy-user-request at scipy.org a ?crit : >> Python(x,y) 1.2.1 is now available on http://www.pythonxy.com. >> >> It is highly recommended to update if you have installed the previous >> release: a critical bug in Eclipse installation is now fixed. >> Besides, it is quite a small update patch (<1Mb) so it's definitely >> worth it. >> > > Congratulation for your work. I tried to download the sources, and could > not find a link on the website ? > > thanks, > > David > Thanks. Actually, sources were available at the beginning of the project, but then priority was given to releases and patches. Furthermore, I'm working on it at home where I'm currently struggling to upload new releases due to bandwidth limitations. Right now for example, I have been uploading (or more exactly trying to upload) the new release for 3 hours... That can be very discouraging sometimes. This being said, I will update the sources again shortly (but these are not really "sources", the real sources are distributed by the developers of included packages: this is just a redistribution, like a zip archive containing all these binary packages). 
Regards, Pierre From contact at pythonxy.com Thu May 29 17:25:24 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Thu, 29 May 2008 23:25:24 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.2 Message-ID: <483F1F44.5020604@pythonxy.com> Hi all, Python(x,y) 1.2.2 is now available on http://www.pythonxy.com. Changes history 05 -29 -2008 - Version 1.2.2 : * Updated: o NumPy 1.1.0 (see release notes: http://sourceforge.net/project/shownotes.php?release_id=602575&group_id=1369) o IPython 0.8.3 (see release notes: http://ipython.scipy.org/moin/WhatsNew083) * Added: o Windows installer: Python(x,y) installation folder (base directory) may be customized, then one can either install packages in this base directory (default paths) or customize the installation folders (possibly outside the base directory) * Corrected: o Documentation: many broken shortcuts were fixed Regards, Pierre Raybaut From millman at berkeley.edu Thu May 29 18:11:17 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 29 May 2008 15:11:17 -0700 Subject: [SciPy-user] ANN: NumPy 1.1.0 Message-ID: I'm pleased to announce the release of NumPy 1.1.0. NumPy is the fundamental package needed for scientific computing with Python. It contains: * a powerful N-dimensional array object * sophisticated (broadcasting) functions * basic linear algebra functions * basic Fourier transforms * sophisticated random number capabilities * tools for integrating Fortran code. Besides it's obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide-variety of databases. This is the first minor release since the 1.0 release in October 2006. There are a few major changes, which introduce some minor API breakage. In addition this release includes tremendous improvements in terms of bug-fixing, testing, and documentation. For information, please see the release notes: http://sourceforge.net/project/shownotes.php?release_id=602575&group_id=1369 Thank you to everybody who contributed to this release. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fperez.net at gmail.com Thu May 29 20:25:55 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 29 May 2008 17:25:55 -0700 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.2 In-Reply-To: <483F1F44.5020604@pythonxy.com> References: <483F1F44.5020604@pythonxy.com> Message-ID: On Thu, May 29, 2008 at 2:25 PM, Pierre Raybaut > o IPython 0.8.3 (see release notes: > http://ipython.scipy.org/moin/WhatsNew083) Wow, that was quick :) Thanks! Cheers, f From fperez.net at gmail.com Thu May 29 22:44:22 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 29 May 2008 19:44:22 -0700 Subject: [SciPy-user] Tutorials at Scipy 2008 Message-ID: Hi all, Travis Oliphant and myself have signed up to coordinate the tutorials sessions at this year's SciPy conference. Our tentative plan is described here: http://scipy.org/SciPy2008/Tutorials but it basically consists of holding in parallel: 1. A 2-day hands-on tutorial for beginners. 2. A set of 2 or 4 hour sessions on special topics. We need input from people on: - Do you like this idea? - If yes for #1, any suggestions/wishes? Eric Jones, Travis O and myself have all taught similar things and could potentially do it again, but none of us is trying to impose it. 
If someone else wants to do it, by all means mention it. The job could be split across multiple people once an agenda is organized.
- For #2, please go to the wiki and fill in ideas for topics and/or presenters. We'll need a list of viable topics with actual presenters before we start narrowing down the schedule into something more concrete.

Feel free to either discuss things here or to just put topics on the wiki. I find wikis to be a poor place for conversation but excellent for summarizing items. I'll try to update the wiki with ideas that arise here, but feel free to directly edit the wiki if you just want to suggest a specific topic or brief piece of info; do NOT feel like you have to vet anything on list.

Cheers,

Travis and Fernando.

From david at ar.media.kyoto-u.ac.jp Thu May 29 23:15:21 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 30 May 2008 12:15:21 +0900
Subject: [SciPy-user] [ Python(x, y) ] New release : 1.2.1 - Critical bug fix
In-Reply-To: <483F1B4C.5060501@pythonxy.com>
References: <483F1B4C.5060501@pythonxy.com>
Message-ID: <483F7149.1050800@ar.media.kyoto-u.ac.jp>

Pierre Raybaut wrote:
> Right now, for example, I have
> been uploading (or more exactly trying to upload) the new release for 3
> hours... That can be very discouraging sometimes.
>

Ok, thanks for the clarification.

> This being said, I will update the sources again shortly (but these are
> not really "sources": the real sources are distributed by the developers
> of the included packages; this is just a redistribution, like a zip archive
> containing all these binary packages).
>

Sure, but I was curious to see how you built the binaries and how the update system works,

cheers,

David

From david at ar.media.kyoto-u.ac.jp Thu May 29 23:52:29 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 30 May 2008 12:52:29 +0900
Subject: [SciPy-user] Tutorials at Scipy 2008
In-Reply-To:
References:
Message-ID: <483F79FD.7030309@ar.media.kyoto-u.ac.jp>

Fernando Perez wrote:
> Feel free to either discuss things here or to just put topics on the
> wiki. I find wikis to be a poor place for conversation but excellent
> for summarizing items. I'll try to update the wiki with ideas that
> arise here, but feel free to directly edit the wiki if you just want
> to suggest a specific topic or brief piece of info; do NOT feel like
> you have to vet anything on list.
>
>

Some people have asked me about numscons, but I don't know if it's an interesting enough topic; build issues are pretty boring compared to the proposed tutorials, and I guess most people interested in it know how to go through it.

As I haven't been to a scipy conference before, I am not sure about the typical audience, so I have a hard time gauging the interest.

cheers,

David

From prabhu at aero.iitb.ac.in Fri May 30 00:23:17 2008
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Fri, 30 May 2008 09:53:17 +0530
Subject: [SciPy-user] Tutorials at Scipy 2008
In-Reply-To: <483F79FD.7030309@ar.media.kyoto-u.ac.jp>
References: <483F79FD.7030309@ar.media.kyoto-u.ac.jp>
Message-ID: <483F8135.9070703@aero.iitb.ac.in>

David Cournapeau wrote:
> Some people have asked me about numscons, but I don't know if it's an
> interesting enough topic; build issues are pretty boring compared to the
> proposed tutorials, and I guess most people interested in it know how to
> go through it.

I think this would be very useful, so you have +2 from me.
regards,
prabhu

From fperez.net at gmail.com Fri May 30 02:29:05 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 29 May 2008 23:29:05 -0700
Subject: [SciPy-user] Tutorials at Scipy 2008
In-Reply-To: <483F79FD.7030309@ar.media.kyoto-u.ac.jp>
References: <483F79FD.7030309@ar.media.kyoto-u.ac.jp>
Message-ID:

On Thu, May 29, 2008 at 8:52 PM, David Cournapeau wrote:
> Some people have asked me about numscons, but I don't know if it's an
> interesting enough topic; build issues are pretty boring compared to the
> proposed tutorials, and I guess most people interested in it know how to
> go through it.
>
> As I haven't been to a scipy conference before, I am not sure about the
> typical audience, so I have a hard time gauging the interest.

Definitely put it in! While build issues may appear secondary, they are actually in many ways one of the most critical barriers to adoption we have. Don't be shy, go ahead.

Cheers,
f

From contact at pythonxy.com Fri May 30 17:07:59 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Fri, 30 May 2008 23:07:59 +0200
Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.3
Message-ID: <48406CAF.9030804@pythonxy.com>

Hi all,

Python(x,y) 1.2.3 is now available on http://www.pythonxy.com. Hopefully, updates should now be released at a lower rate.

This update is recommended for Python(x,y) 1.2.x users, as a bug showed up with the new PyQt release: zoom/pan mode in embedded matplotlib (0.91.2) widgets is very slow. So PyQt 4.3.3 is back in Python(x,y), as well as the associated Qt Eclipse integration plugin and Qt help files (Eclipse help browser). Anyway, PyQt 4.4.2's new features were of no particular interest for purely scientific use. Please see the other changes below.

Changes history
05-30-2008 - Version 1.2.3:
* Corrected:
  o PyQt 4.4.2 --> PyQt 4.3.3 (performance bug in the new version for embedded matplotlib widgets)
  o Qt Eclipse Integration 1.4.0 --> 1.0.1 (see above)
  o Windows installer bug: Documentation directory customization was not taken into account
* Added:
  o Windows installer: Python(x,y) may be partially installed (i.e. without wxPython and the Enthought Tool Suite) without any administrative privileges

Regards,
Pierre Raybaut
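
For readers unfamiliar with what "embedded matplotlib widgets" refers to in the release notes above: the usual setup is a matplotlib FigureCanvas from the Qt4Agg backend placed inside a PyQt4 window, with the navigation toolbar supplying the zoom/pan mode whose performance regressed under PyQt 4.4.2. The sketch below is not code from the thread or from the Python(x,y) distribution; it is a minimal illustration assuming the PyQt4 and 2008-era matplotlib APIs, and the window class name, sample data, and layout are made up for the example.

import sys
import numpy
from PyQt4 import QtGui
from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
# NavigationToolbar2QTAgg is the name used in matplotlib 0.91-era releases;
# later versions renamed this class.
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QTAgg as NavigationToolbar

class PlotWindow(QtGui.QMainWindow):
    """Illustrative window embedding a matplotlib canvas in a PyQt4 GUI."""
    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        self.setWindowTitle("Embedded matplotlib widget")
        # The Figure/FigureCanvas pair is the "embedded matplotlib widget";
        # zoom/pan is driven by the navigation toolbar attached to it.
        fig = Figure(figsize=(5, 4), dpi=100)
        ax = fig.add_subplot(111)
        x = numpy.linspace(0, 10, 500)
        ax.plot(x, numpy.sin(x))
        canvas = FigureCanvas(fig)
        toolbar = NavigationToolbar(canvas, self)  # provides the zoom/pan buttons
        # Stack toolbar and canvas vertically in the central widget.
        central = QtGui.QWidget()
        layout = QtGui.QVBoxLayout(central)
        layout.addWidget(toolbar)
        layout.addWidget(canvas)
        self.setCentralWidget(central)

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    win = PlotWindow()
    win.show()
    sys.exit(app.exec_())

Running this requires PyQt4 and a matplotlib build with the Qt4Agg backend enabled; clicking the toolbar's pan/zoom buttons exercises the interaction path that the changelog entry above describes as slow under PyQt 4.4.2.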