From dmitrey.kroshko at scipy.org Tue Jan 1 04:43:53 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 01 Jan 2008 11:43:53 +0200 Subject: [SciPy-user] Ann: OpenOpt v 0.15 (free optimization framework) Message-ID: <477A0B59.1020100@scipy.org> Hi Lev, openopt0.15.tar.gz is the stable version; openopt.tar.gz is a file containing the latest changes (mentioned in the openopt blog) for those who can't use svn. Regards, D Lev Givon wrote: > Why are there two tarballs available for download on > http://scipy.org/scipy/scikits/wiki/OpenOptInstall? They both seem to > contain the same version of the software. > > L.G. > > > From millman at berkeley.edu Tue Jan 1 20:10:01 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 1 Jan 2008 17:10:01 -0800 Subject: [SciPy-user] read_array problem In-Reply-To: <200712171812.11940.cscheit@lstm.uni-erlangen.de> References: <200712171812.11940.cscheit@lstm.uni-erlangen.de> Message-ID: On Dec 17, 2007 9:12 AM, Christoph Scheit wrote: > just for curiosity I have a question regarding read_array. > When I use the scipy.io read_array function I observe > some behaviour which I don't understand... Hey Chris, scipy.io.read_array is no longer supported. In the next release of scipy, it will be officially deprecated. Please take a look at numpy.loadtxt(), which has the same functionality with a slightly different syntax. The loadtxt docstring should provide detailed instructions for how to use it. If you have any questions about using numpy.loadtxt(), please let us know. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From Alexander.Dietz at astro.cf.ac.uk Wed Jan 2 11:35:00 2008 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Wed, 2 Jan 2008 16:35:00 +0000 Subject: [SciPy-user] Usage of scipy KS test Message-ID: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> Hi, I am trying to use the KS test implemented in scipy.stats, but nowhere could I find an example of how to use this function for my purposes. Therefore let me describe what I have and what I want to do. I have three lists: x - vector of points on the x-axis y - vector of measured values for each of the x-points (cumulative distribution, first value: 0.0, last value: 1.0) m - vector containing values calculated from a model (cumulative distribution, first value: 0.0, last value: 1.0) Each list has the same length. Now I want to test the hypothesis that both vectors y and m are from the same distribution (or not from the same distribution). I would very much appreciate it if someone could send me a concrete example using the vectors y and m. Thanks Alex From peridot.faceted at gmail.com Wed Jan 2 12:08:01 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 2 Jan 2008 12:08:01 -0500 Subject: [SciPy-user] Usage of scipy KS test In-Reply-To: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> References: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> Message-ID: On 02/01/2008, Alexander Dietz wrote: > I am trying to use the KS test implemented in scipy.stats, but nowhere > could I find an example of how to use this function for my purposes. It is indeed unfortunate that the man page doesn't have an example.
Here is one (in doctest format, I think, for easy inclusion into scipy):

>>> import numpy
>>> import scipy.stats
>>> a = numpy.array([0.56957006, 0.81129082, 0.58896055, 0.63162055, 0.39305061, 0.92327368, 0.72176744, 0.69589162, 0.12716994, 0.80996302])
>>> scipy.stats.kstest(a, lambda x: x)
(0.26957006, array(0.19655500176460927))
>>> scipy.stats.kstest(a**4, lambda x: x)
(0.46678224511522154, array(0.0080924628974947677))

Let me explain: a was generated using numpy.random.uniform(size=10); as you can see, I hope, they are uniformly distributed. Each time scipy.stats.kstest is run, it returns two values: the KS D value (which is not very meaningful) and the probability that such a collection of values would be drawn from a distribution with a CDF given by the second argument. You can see that a is reasonably likely to have been drawn from a uniform distribution, but a**4 is not. > Therefore let me describe what I have and what I want to do. I have three > lists: > x - vector of points on the x-axis > y - vector of measured values for each of the x-points (cumulative > distribution, first value: 0.0, last value: 1.0) > m - vector containing values calculated from a model (cumulative > distribution, first value: 0.0, last value: 1.0) > > Each list has the same length. Now I want to test the hypothesis that both > vectors y and m are from the same distribution (or not from the same > distribution). > > I would very much appreciate it if someone could send me a concrete example using > the vectors y and m. This format is more complicated than what we need. scipy.stats.kstest wants the list of (not necessarily sorted) x values, and a function that evaluates the CDF. The simplest thing to do is provide it your function that evaluates the CDF rather than computing m. If, however, you have already computed m, you can cheat: scipy.stats.kstest only needs to evaluate the function at the points in x, so you can create a function based on dictionary lookup:

scipy.stats.kstest(x,dict(zip(x,m)).get)

This should return a tuple containing the KS D value and the probability a data set like this one would be obtained from a probability distribution with your CDF. I should say, there's another mode scipy.stats.kstest can be used in: you can give it a random number generator and the CDF of the distribution it is supposed to generate, and it will see if the random number generator is (with a reasonable probability) functioning properly. Is nose testing extensible enough to be able to mark (with a decorator perhaps?) some tests as probabilistic, that is, a test which even a correct function has a small chance of failing? The standard idiom for such a test is to run it once, and if it fails run it again before reporting failure. Good luck, Anne From dmitrey.kroshko at scipy.org Wed Jan 2 14:28:40 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 02 Jan 2008 21:28:40 +0200 Subject: [SciPy-user] howto dot(a, b.T), some a or b coords are zeros Message-ID: <477BE5E8.6010109@scipy.org> hi all, I have 2 vectors a and b of shape (n, 1) (n can be 1...10^3, 10^4, may be more); some coords of a or b usually are zeros (or both a and b, but b is more often); getting the matrix c = dot(a, b.T) is required (c.shape = (n,n)). What's the best way to speed up the calculations (w/o using scipy, only numpy)? (I intend to use the feature to provide a minor enhancement for the NLP/NSP ralg solver). Thank you in advance, D.
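(For what it's worth, one way to exploit the zeros using only numpy is to fill just the rows and columns that actually contribute. A rough sketch — the size and the nonzero patterns below are made up for illustration:)

import numpy as np

n = 1000                                  # illustrative size
a = np.zeros((n, 1))
b = np.zeros((n, 1))
a[::7] = 1.0                              # mostly-zero example data
b[::13] = 2.0

c_dense = np.dot(a, b.T)                  # plain dense outer product, for reference

ia, ib = np.flatnonzero(a), np.flatnonzero(b)
c = np.zeros((n, n))                      # only touch contributing rows/columns
c[np.ix_(ia, ib)] = a[ia] * b[ib].T
assert np.allclose(c, c_dense)

Whether this wins depends on how sparse a and b really are; for nearly dense vectors the plain dot will be faster.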
From Alexander.Dietz at astro.cf.ac.uk Wed Jan 2 14:44:55 2008 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Wed, 2 Jan 2008 19:44:55 +0000 Subject: [SciPy-user] Usage of scipy KS test In-Reply-To: References: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> Message-ID: <9cf809a00801021144g1e00541k22daf990247d0b7e@mail.gmail.com> Hi, thanks a lot for the quick reply, but your suggestion does not seem to work. On Jan 2, 2008 5:08 PM, Anne Archibald wrote: > > > This format is more complicated than what we need. scipy.stats.kstest > wants the list of (not necessarily sorted) x values, and a function > that evaluates the CDF. The simplest thing to do is provide it your > function that evaluates the CDF rather than computing m. If, however, > you have already computed m, you can cheat: scipy.stats.kstest only > needs to evaluate the function at the points in x, so you can create a > function based on dictionary lookup: > > scipy.stats.kstest(x,dict(zip(x,m)).get) > > This should return a tuple containing the KS D value and the > probability a data set like this one would be obtained from a > probability distribution with your CDF. > > When I use your suggestion, I get an error:

File "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", line 1716, in kstest
    cdfvals = cdf(vals, *args)
TypeError: unhashable type

I tried with get(), but this also did not work. Also, in this example I do not see the vector 'm' containing the modeled values. They must somehow enter the expression.... Assuming I calculate the D-value by myself, can I then use stats.ksprob to calculate the probability? Do I have to use sqrt(n)*D as argument? Thanks Alex From peridot.faceted at gmail.com Wed Jan 2 15:15:25 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 2 Jan 2008 15:15:25 -0500 Subject: [SciPy-user] Usage of scipy KS test In-Reply-To: <9cf809a00801021144g1e00541k22daf990247d0b7e@mail.gmail.com> References: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> <9cf809a00801021144g1e00541k22daf990247d0b7e@mail.gmail.com> Message-ID: On 02/01/2008, Alexander Dietz wrote: > On Jan 2, 2008 5:08 PM, Anne Archibald wrote: > > scipy.stats.kstest(x,dict(zip(x,m)).get) > When I use your suggestion, I get an error: > > File > "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", > line 1716, in kstest > cdfvals = cdf(vals, *args) > TypeError: unhashable type > > I tried with get(), but this also did not work. Also, in this example I do > not see the vector 'm' containing the modeled values. They must somehow > enter the expression.... Well, if x is the list of x values (floats) and m is the list of CDF values (also floats), then zip(x,m) is the list of pairs (x, CDF(x)). If you have arrays, you might need to convert them to lists first (x=list(x) for example). dict(zip(x,m)) makes a dictionary out of such a list of pairs. dict(zip(x,m)).get is a function that maps xs to ms. Unfortunately it only maps a single x to a single m; you need to use numpy.vectorize on it:

scipy.stats.kstest(x,numpy.vectorize(dict(zip(x,m)).get))

numpy.vectorize makes it able to map an array of xs to an array of ms. That should work. But if you can, you should give kstest your real CDF-calculating function (possibly wrapped in numpy.vectorize, if it doesn't work on arrays). > Assuming I calculate the D-value by myself, can I then use stats.ksprob to > calculate the probability? Do I have to use sqrt(n)*D as argument?
I'm not sure what ksprob wants. It will really be clearer to use kstest. I should warn you, if your probability distribution is not continuous - like, for example, a Poisson distribution - kstest will not work. Anne From Alexander.Dietz at astro.cf.ac.uk Wed Jan 2 15:24:22 2008 From: Alexander.Dietz at astro.cf.ac.uk (Alexander Dietz) Date: Wed, 2 Jan 2008 20:24:22 +0000 Subject: [SciPy-user] Usage of scipy KS test In-Reply-To: References: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> <9cf809a00801021144g1e00541k22daf990247d0b7e@mail.gmail.com> Message-ID: <9cf809a00801021224u786232fau508cac480c8d08bb@mail.gmail.com> Hi, On Jan 2, 2008 8:15 PM, Anne Archibald wrote: > On 02/01/2008, Alexander Dietz wrote: > > > On Jan 2, 2008 5:08 PM, Anne Archibald wrote: > > > > scipy.stats.kstest(x,dict(zip(x,m)).get) > > > When I use your suggestion, I get an error: > > > > File > > "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", > > line 1716, in kstest > > cdfvals = cdf(vals, *args) > > TypeError: unhashable type > > > > I tried with get(), but this also did not work. Also, in this example I > do > > not see the vector 'm' containing the modeled values. They must somehow > > enter the expression.... > > Well, if x is the list of x values (floats) and m is the list of CDF > values (also floats), then zip(x,m) is the list of pairs (x, CDF(x)). > If you have arrays, you might need to convert them to lists first > (x=list(x) for example). dict(zip(x,m)) makes a dictionary out of such > a list of pairs. dict(zip(x,m)).get is a function that maps xs to ms. > Unfortunately it only maps a single x to a single m; you need to use > numpy.vectorize on it: > > scipy.stats.kstest(x,numpy.vectorize(dict(zip(x,m)).get)) > > numpy.vectorize makes it able to map an array of xs to an array of ms. > That should work. But if you can, you should give kstest your real > CDF-calculating function (possibly wrapped in numpy.vectorize, if it > doesn't work on arrays). Sorry, I mixed up two vectors. In the expression above you use the vectors x and m, but not y. See the following concrete example, which defines three vectors and a plot:

x = numpy.asarray([ 0.089, 0.11, 0.161, 0.226, 0.257, 0.287, 0.31, 0.41, 0.438, 0.45,\
0.547, 0.827, 1.13, 1.8 ])
y = numpy.asarray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
m = numpy.asarray([ 0.91405923 , 1.36472838, 2.94870517, 4.59609492, 5.37847868,\
6.11545809 , 6.57990978, 8.56403531, 9.0550575, 9.20841591,\
10.50502489, 12.50640372, 13.29624546, 13.64958435])
clf()
plot( x, y)
plot( x, m)
savefig('test.png')

My question: with what probability do the two lines match, i.e. what is the probability that both curves are (not) from the same distribution? Also, your example above still did not work. Here is the error:

File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 799, in __init__
    nin, ndefault = _get_nargs(pyfunc)
File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 756, in _get_nargs
    raise ValueError, 'failed to determine the number of arguments for %s' % (obj)
ValueError: failed to determine the number of arguments for <built-in method get of dict object at 0x...>

Thanks Alex > > Assuming I calculate the D-value by myself, can I then use stats.ksprob to > > calculate the probability? Do I have to use sqrt(n)*D as argument? > > I'm not sure what ksprob wants. It will really be clearer to use kstest. > > I should warn you, if your probability distribution is not continuous > - like, for example, a Poisson distribution - kstest will not work.
> > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sransom at nrao.edu Wed Jan 2 15:29:38 2008 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 2 Jan 2008 15:29:38 -0500 Subject: [SciPy-user] Usage of scipy KS test In-Reply-To: References: <9cf809a00801020835p79d5eaa5t6f9baf25985991fc@mail.gmail.com> <9cf809a00801021144g1e00541k22daf990247d0b7e@mail.gmail.com> Message-ID: <200801021529.38881.sransom@nrao.edu> > > Assumed, I calculate the D-value by myself. Can I then use > > stats.ksprob to calculate the probability? Do I have to use > > sqrt(n)*D as argument? > > I'm not sure what ksprob wants. It will really be clearer to use > kstest. ksprob (which is the same as scipy.special.kolmogorov) expects sqrt(n)*D. This is useful if you determine D from your own routine for a two-sided KS-test, for example. Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From wnbell at gmail.com Wed Jan 2 15:47:36 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 2 Jan 2008 14:47:36 -0600 Subject: [SciPy-user] howto dot(a, b.T), some a or b coords are zeros In-Reply-To: <477BE5E8.6010109@scipy.org> References: <477BE5E8.6010109@scipy.org> Message-ID: On Jan 2, 2008 1:28 PM, dmitrey wrote: > hi all, > I have 2 vectors a and b of shape (n, 1) (n can be 1...10^3, 10^4, may > be more); some coords of a or b usually are zeros (or both a and b, but > b is more often); getting matrix c = dot(a, b.T) is required (c.shape = > (n,n)) > > What's the best way to speedup calculations (w/o using scipy, only numpy)? > (I intend to use the feature to provide a minor enhancement for NLP/NSP > ralg solver). How much more expensive is dot(a,b.T) than zeros((n,n))? Is outer() any faster? What proportion of a and b are zero? You could remove all zeros from a and b, compute that outer product, and then paste the results back into an n by n matrix. I doubt this would be any faster though since the outerproduct doesn't do many FLOPs. I know you don't want to use scipy, but time the following: from scipy.sparse import * asp = csr_matrix(a) bsp = csr_matrix(b.T) c = asp * bsp # time this -- Nathan Bell wnbell at gmail.com From aisaac at american.edu Wed Jan 2 15:56:33 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 2 Jan 2008 15:56:33 -0500 Subject: [SciPy-user] howto dot(a, b.T), some a or b coords are zeros In-Reply-To: <477BE5E8.6010109@scipy.org> References: <477BE5E8.6010109@scipy.org> Message-ID: On Wed, 02 Jan 2008, dmitrey apparently wrote: > I have 2 vectors a and b of shape (n, 1) (n can be > 1...10^3, 10^4, may be more); some coords of a or > b usually are zeros (or both a and b, but b is more > often); getting matrix c = dot(a, b.T) is required > (c.shape = (n,n)) Are you sure you need c explicitly? How exactly is it used? (I.e., what is the algebraic expression in which c is used?) Cheers, Alan Isaac From ndbecker2 at gmail.com Wed Jan 2 22:04:57 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 02 Jan 2008 22:04:57 -0500 Subject: [SciPy-user] [newb] how to create arrays Message-ID: How would I create a vector of complex random variables? 
I'm thinking the best way is to create a complex vector, then assign to the real and imag parts (using e.g., random.standard_normal). I don't see any way to create an uninitialized array. I guess I'd have to use zeros? Is there any way to avoid the wasted time of initializing just to write over it? From fperez.net at gmail.com Wed Jan 2 22:09:42 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 2 Jan 2008 20:09:42 -0700 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: Message-ID: On Jan 2, 2008 8:04 PM, Neal Becker wrote: > How would I create a vector of complex random variables? > > I'm thinking the best way is to create a complex vector, then > assign to the real and imag parts (using e.g., random.standard_normal). > > I don't see any way to create an uninitialized array. I guess I'd have to > use zeros? Is there any way to avoid the wasted time of initializing just > to write over it? np.empty() cheers, f From peridot.faceted at gmail.com Thu Jan 3 02:56:05 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 3 Jan 2008 02:56:05 -0500 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: Message-ID: On 02/01/2008, Neal Becker wrote: > How would I create a vector of complex random variables? > > I'm thinking the best way is to create a complex vector, then > assign to the real and imag parts (using e.g., random.standard_normal). > > I don't see any way to create an uninitialized array. I guess I'd have to > use zeros? Is there any way to avoid the wasted time of initializing just > to write over it? You can create an uninitialized array if you want to. But, from your question, you may well be thinking about your problem in the wrong way. If all you want to do is store a whole bunch of values you compute in a loop, use a python list. Python lists are really quite efficient and convenient. The point of scipy is that it lets you operate on the whole vector at once:

a = numpy.linspace(0,numpy.pi,1000)
b = numpy.sin(a)
print numpy.average(b)

The main reason for this is conceptual clarity: you can start thinking of array operations as single steps in your program, allowing you to do more with the same size of program. Secondarily, using the numpy functions allows the looping to be done in compiled code, which is much faster than python code. If you are filling an array, element by element, with a loop in python, the time the python code takes to run will be much longer than the time spent initializing with zeros. Thus numpy.empty() gets much less use than you might expect.
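For instance, a quick side-by-side sketch (the timings are machine-dependent and purely illustrative):

import timeit

loop = timeit.Timer('[math.sin(0.001 * i) for i in xrange(10000)]', 'import math')
vec = timeit.Timer('numpy.sin(numpy.linspace(0.0, 10.0, 10000))', 'import numpy')
# run each statement 100 times; the python loop is typically many times slower
print 'python loop:', loop.timeit(100)
print 'numpy      :', vec.timeit(100)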
Look into > using numpy functions - linspace, arange, zeros, ones, > exp/sin/arctan/etc. to create your array. That's interesting. I'd put it almost exactly backwards. Efficiency is the #1 reason for arrays, and as a bonus it happens that *some* things are shorter and cleaner expressed as array operations. I've been back to using a compiled language for a stint and it's absolute bliss to just be able to write a blasted for-loop when I want to, without worrying about my program slowing to a crawl. As opposed to always having to think of some clever way to use the array machinery to achieve the same thing efficiently (and sometimes having to think of two or three different ways when the first one doesn't turn out to be fast enough). But I still fire up matplotlib and hack up some quick NumPy scripts whenever I want to take a look at the numbers coming out of my compiled code. :-) --bb From peridot.faceted at gmail.com Thu Jan 3 03:57:02 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 3 Jan 2008 03:57:02 -0500 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: Message-ID: On 03/01/2008, Bill Baxter wrote: > On Jan 3, 2008 4:56 PM, Anne Archibald wrote: > > The main reason for this is conceptual clarity: you can start thinking > > of array operations as single steps in your program, allowing you to > > do more with the same size of program. Secondarily, using the numpy > > functions allows the looping to be done in compiled code, which is > > much faster than python code. If you are filling an array, element by > > element, with a loop in python, the time the python code takes to run > > will be much longer than the time spend initializing with zeros. Thus > > numpy.empty() gets much less use than you might expect. Look into > > using numpy functions - linspace, arange, zeros, ones, > > exp/sin/arctan/etc. to create your array. > > That's interesting. I'd put it almost exactly backwards. Efficiency > is the #1 reason for arrays, and as a bonus it happens that *some* > things are shorter and cleaner expressed as array operations. I've > been back to using a compiled language for a stint and it's absolute > bliss to just be able to write a blasted for-loop when I want to, > without worrying about my program slowing to a crawl. As opposed to > always having to think of some clever way to use the array machinery > to achieve the same thing efficiently (and sometimes having to think > of two or three different ways when the first one doesn't turn out to > be fast enough). I think I differ in that rather than sweat over how to express something in an arrayish way, I'm perfectly willing to write a for loop in python (or use vectorize, which is equivalent). Sure it's slow, comparatively, but I want to get the code written first, and worry about fast-enough later. > But I still fire up matplotlib and hack up some quick NumPy scripts > whenever I want to take a look at the numbers coming out of my > compiled code. :-) Exactly what I mean. Except much of the code I write is closer to this - in the sense that it only ever needs to run to completion once - than it is to the problems of classical computer science (sweat blood finding the most efficient solution because it needs to run a zillion times). 
Anne From wnbell at gmail.com Thu Jan 3 04:14:00 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 3 Jan 2008 03:14:00 -0600 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: Message-ID: On Jan 3, 2008 2:29 AM, Bill Baxter wrote: > without worrying about my program slowing to a crawl. As opposed to > always having to think of some clever way to use the array machinery > to achieve the same thing efficiently (and sometimes having to think > of two or three different ways when the first one doesn't turn out to > be fast enough). Bah, vectorizing loops builds character :) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From dmitrey.kroshko at scipy.org Thu Jan 3 04:30:06 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 03 Jan 2008 11:30:06 +0200 Subject: [SciPy-user] howto dot(a, b.T), some a or b coords are zeros In-Reply-To: References: <477BE5E8.6010109@scipy.org> Message-ID: <477CAB1E.3040205@scipy.org> Alan G Isaac wrote: > On Wed, 02 Jan 2008, dmitrey apparently wrote: > >> I have 2 vectors a and b of shape (n, 1) (n can be >> 1...10^3, 10^4, may be more); some coords of a or >> b usually are zeros (or both a and b, but b is more >> often); getting matrix c = dot(a, b.T) is required >> (c.shape = (n,n)) >> > > Are you sure you need c explicitly? > Yes > How exactly is it used? > (I.e., what is the algebraic expression > in which c is used?) > here's a part of ralg code: B is an n x n matrix, g, g1, g2 are n x 1 vectors, w is a real number

###################
for i in xrange(p.maxIter):
    ...
    g2 = dot(B, g)
    g1 = dot(B, g2)
    ...
    g2 = g1 = g = dot(B, g1)
    B += dot(dot(B,g), g*w)  # <- here I intend to get some economy
    g = g2.copy()
###################

For some initial iterations, when initial point is infeasible and max constraint derivative depends on some coords only (or feasible + objfun depends on some coords only), dot(B, g1) has lots of zeros (for example, n-1 zeros for box-bound constraints), as well as dot(B, g) or g*w. These cases are especially important to the ralg-based solver nssolve. Regards, D. From wbaxter at gmail.com Thu Jan 3 04:38:54 2008 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 3 Jan 2008 18:38:54 +0900 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: Message-ID: On Jan 3, 2008 6:14 PM, Nathan Bell wrote: > On Jan 3, 2008 2:29 AM, Bill Baxter wrote: > > without worrying about my program slowing to a crawl. As opposed to > > always having to think of some clever way to use the array machinery > > to achieve the same thing efficiently (and sometimes having to think > > of two or three different ways when the first one doesn't turn out to > > be fast enough). > > Bah, vectorizing loops builds character :) Guess that explains all the characters around here. :-) Actually I find vectorizing puzzles to be quite fun as puzzles. As do many others it seems, given how quickly questions like "how do I do this efficiently" get answered around here. But sometimes I just want code to write itself -- and do so efficiently -- without me having to think. ;-) --bb From robert.kern at gmail.com Thu Jan 3 05:13:56 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 03 Jan 2008 04:13:56 -0600 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: Message-ID: <477CB564.9030203@gmail.com> Anne Archibald wrote: > On 02/01/2008, Neal Becker wrote: >> How would I create a vector of complex random variables?
>> >> I'm thinking the best way is to create a complex vector, then >> assign to the real and imag parts (using e.g., random.standard_normal). >> >> I don't see any way to create an uninitialized array. I guess I'd have to >> use zeros? Is there any way to avoid the wasted time of initializing just >> to write over it? > > You can create an uninitialized array if you want to. But, from your > question, you may well be thinking about your problem in the wrong > way. If all you want to do is store a whole bunch of values you > compute in a loop, use a python list. In this case, he wouldn't be using a loop. from numpy import * z = empty([10], dtype=complex) z.real[:] = random.standard_normal(10) z.imag[:] = random.standard_normal(10) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Thu Jan 3 07:00:55 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 03 Jan 2008 07:00:55 -0500 Subject: [SciPy-user] [newb] how to create arrays References: <477CB564.9030203@gmail.com> Message-ID: Robert Kern wrote: > Anne Archibald wrote: >> On 02/01/2008, Neal Becker wrote: >>> How would I create a vector of complex random variables? >>> >>> I'm thinking the best way is to create a complex vector, then >>> assign to the real and imag parts (using e.g., random.standard_normal). >>> >>> I don't see any way to create an uninitialized array. I guess I'd have >>> to >>> use zeros? Is there any way to avoid the wasted time of initializing >>> just to write over it? >> >> You can create an uninitialized array if you want to. But, from your >> question, you may well be thinking about your problem in the wrong >> way. If all you want to do is store a whole bunch of values you >> compute in a loop, use a python list. > > In this case, he wouldn't be using a loop. > > from numpy import * > > z = empty([10], dtype=complex) > z.real[:] = random.standard_normal(10) > z.imag[:] = random.standard_normal(10) > This is what I had in mind. I guess there isn't something that could do this in a single step, something like: z = random.standard_normal(10) + i * random.standard_normal(10) From robert.kern at gmail.com Thu Jan 3 07:08:19 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 03 Jan 2008 06:08:19 -0600 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: <477CB564.9030203@gmail.com> Message-ID: <477CD033.9010104@gmail.com> Neal Becker wrote: > Robert Kern wrote: > >> Anne Archibald wrote: >>> On 02/01/2008, Neal Becker wrote: >>>> How would I create a vector of complex random variables? >>>> >>>> I'm thinking the best way is to create a complex vector, then >>>> assign to the real and imag parts (using e.g., random.standard_normal). >>>> >>>> I don't see any way to create an uninitialized array. I guess I'd have >>>> to >>>> use zeros? Is there any way to avoid the wasted time of initializing >>>> just to write over it? >>> You can create an uninitialized array if you want to. But, from your >>> question, you may well be thinking about your problem in the wrong >>> way. If all you want to do is store a whole bunch of values you >>> compute in a loop, use a python list. >> In this case, he wouldn't be using a loop. >> >> from numpy import * >> >> z = empty([10], dtype=complex) >> z.real[:] = random.standard_normal(10) >> z.imag[:] = random.standard_normal(10) >> > > This is what I had in mind. 
> > I guess there isn't something that could do this in a single step, something > like: > > z = random.standard_normal(10) + i * random.standard_normal(10) Sure: z = random.standard_normal(10) + 1j * random.standard_normal(10) There's a minor tradeoff wrt the temporary; if your arrays are large enough for you to be concerned about the overhead of zeros() over empty(), you might be concerned about it as well. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ndbecker2 at gmail.com Thu Jan 3 07:17:11 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 03 Jan 2008 07:17:11 -0500 Subject: [SciPy-user] [newb] how to create arrays References: <477CB564.9030203@gmail.com> <477CD033.9010104@gmail.com> Message-ID: Robert Kern wrote: > Neal Becker wrote: >> Robert Kern wrote: >> >>> Anne Archibald wrote: >>>> On 02/01/2008, Neal Becker wrote: >>>>> How would I create a vector of complex random variables? >>>>> >>>>> I'm thinking the best way is to create a complex vector, then >>>>> assign to the real and imag parts (using e.g., >>>>> random.standard_normal). >>>>> >>>>> I don't see any way to create an uninitialized array. I guess I'd >>>>> have to >>>>> use zeros? Is there any way to avoid the wasted time of initializing >>>>> just to write over it? >>>> You can create an uninitialized array if you want to. But, from your >>>> question, you may well be thinking about your problem in the wrong >>>> way. If all you want to do is store a whole bunch of values you >>>> compute in a loop, use a python list. >>> In this case, he wouldn't be using a loop. >>> >>> from numpy import * >>> >>> z = empty([10], dtype=complex) >>> z.real[:] = random.standard_normal(10) >>> z.imag[:] = random.standard_normal(10) >>> >> It seems this also works? z = empty([10], dtype=complex) z.real = random.standard_normal(10) z.imag = random.standard_normal(10) From aisaac at american.edu Thu Jan 3 09:36:13 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 3 Jan 2008 09:36:13 -0500 Subject: [SciPy-user] howto dot(a, b.T), some a or b coords are zeros In-Reply-To: <477CAB1E.3040205@scipy.org> References: <477BE5E8.6010109@scipy.org> <477CAB1E.3040205@scipy.org> Message-ID: On Thu, 03 Jan 2008, dmitrey apparently wrote: > here's a part of ralg code: > B is n x n matrix, g, g1, g2 are n x 1 vectors, w is real number > ################### > for i in xrange(p.maxIter): > ... > g2 = dot(B, g) > g1 = dot(B, g2) > ... > g2 = > g1 = > g = dot(B, g1) > B += dot(dot(B,g), g*w) <- here I intend to get some economy > g = g2.copy() > ################### > For some initial iterations, when initial point is infeasible and max > constraint derivative depends on some coords only (or feasible + objfun > depends on some coords only), dot(B, g1) has lots of zeros (for example, > n-1 zeros for box-bound constraints), as well as dot(B, g) or g*w. > This cases are especially important to ralg-based solver nssolve. Well when g1 has lots of zeros, you can index the nonzeros to shorten the computation, as I showed you last time. In the case you mention, this would be huge. You have in effect B = B + w*(B*(B*g1))*(g1.T *B.T) so exploiting the symmetry of g*g.T would imply other costs. 
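For instance, a rough sketch of the nonzero-indexing idea (the size, density, and w below are made up for illustration):

import numpy as np

n, w = 1000, 0.5                      # illustrative size and step parameter
B = np.random.rand(n, n)
g = np.zeros((n, 1))
g[::50] = 1.0                         # mostly-zero vector, as in early ralg iterations

idx = np.flatnonzero(g)               # the coordinates that actually contribute
Bg = np.dot(B[:, idx], g[idx])        # dot(B, g) restricted to the nonzero coords
assert np.allclose(Bg, np.dot(B, g))

# rank-1 update B += dot(u, v.T) where v = w*g is mostly zero:
u, v = np.dot(B, Bg), w * g
B[:, idx] += np.dot(u, v[idx].T)      # touches only the contributing columns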
Cheers, Alan Isaac From robert.kern at gmail.com Thu Jan 3 15:09:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 03 Jan 2008 14:09:38 -0600 Subject: [SciPy-user] [newb] how to create arrays In-Reply-To: References: <477CB564.9030203@gmail.com> <477CD033.9010104@gmail.com> Message-ID: <477D4102.40001@gmail.com> Neal Becker wrote: >>> Robert Kern wrote: >>>> from numpy import * >>>> >>>> z = empty([10], dtype=complex) >>>> z.real[:] = random.standard_normal(10) >>>> z.imag[:] = random.standard_normal(10) > > It seems this also works? > z = empty([10], dtype=complex) > z.real = random.standard_normal(10) > z.imag = random.standard_normal(10) Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lev at columbia.edu Fri Jan 4 11:44:11 2008 From: lev at columbia.edu (Lev Givon) Date: Fri, 4 Jan 2008 11:44:11 -0500 Subject: [SciPy-user] illegal instruction error in scipy.linalg.decomp.svd In-Reply-To: <20071218003816.GE15380@localhost.cc.columbia.edu> References: <20071218003816.GE15380@localhost.cc.columbia.edu> Message-ID: <20080104164410.GA11663@localhost.cc.columbia.edu> Received from Lev Givon on Mon, Dec 17, 2007 at 07:38:18PM EST: > On a Pentium 4 Linux box running python 2.5.1, scipy 0.6.0, numpy > 1.0.4, and lapack 3.0, I recently noticed that > scipy.linalg.decomp.svd() fails (and causes python to crash) with an > "illegal instruction" error. A bit of debugging revealed that the > error occurs in the line > > lwork = calc_lwork.gesdd(gesdd.prefix,m,n,compute_uv)[1] > > in scipy/linalg/decomp.py. Interestingly, scipy 0.5.2.1 on the same > box (with all of the other software packages unchanged) does not > exhibit this problem. Moreover, when I install scipy 0.6.0 along with > all of the other packages on a Linux machine containing a Pentium D > CPU, I do not observe the problem either. > > Being that I am running Mandriva Linux, closed scipy bug 540 caught my > eye. I'm not sure how it could be related to the above problem, though > (and I also do not know what the lapack patch mentioned in the ticket > could have been - even though I have been maintaining Mandriva's > lapack lately :-). > > Thoughts? This problem is also present in the latest svn release of scipy (3781). L.G. From jre at enthought.com Fri Jan 4 15:48:55 2008 From: jre at enthought.com (J. Ryan Earl) Date: Fri, 04 Jan 2008 14:48:55 -0600 Subject: [SciPy-user] planet.scipy.org Message-ID: <477E9BB7.2000100@enthought.com> There was a problem with how the DNS was setup for this site, it has been corrected. There are no known outstanding DNS issues. Please tell me ASAP if you notice any problems. Thanks, J. Ryan Earl IT Administrator Enthought, Inc. From millman at berkeley.edu Fri Jan 4 16:00:14 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 4 Jan 2008 13:00:14 -0800 Subject: [SciPy-user] [Enthought-dev] planet.scipy.org In-Reply-To: <477E9BB7.2000100@enthought.com> References: <477E9BB7.2000100@enthought.com> Message-ID: On Jan 4, 2008 12:48 PM, J. Ryan Earl wrote: > There was a problem with how the DNS was setup for this site, it has > been corrected. Thanks for getting this fixed! 
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From jelleferinga at gmail.com Fri Jan 4 18:54:09 2008 From: jelleferinga at gmail.com (jelle feringa) Date: Fri, 4 Jan 2008 23:54:09 +0000 (UTC) Subject: [SciPy-user] Mayavi2 installation woes (OS X 10.4.11, Python 2.5) References: <4769CCFD.4080301@gmail.com> Message-ID: Hi, I'm facing the same issue as William here. The error seems pretty precise: swig_sources() takes exactly 3 arguments (2 given) swig is known to be quite picky on version compatibility issues. might this be issue? i'm running swig 1.3.20 best, -jelle From jh at physics.ucf.edu Fri Jan 4 23:58:18 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 04 Jan 2008 23:58:18 -0500 Subject: [SciPy-user] SciPy conference dates Message-ID: scipy.org says: > SciPy Conference Save the Date (2007-12-28) The conference will be > held the week of August 19-24, 2008 - exact schedule TBD. This has probably been raised before, but the dates are in the first week of classes at many colleges and universities, particularly in the South. Other schools have orientations in which faculty must participate that week. This makes it essentially impossible for many of us to go. While there will be conflict for some people in any given week, it would seem that, given the number of academics here, a better week might exist, perhaps in mid-late September? This year's dates may be locked in at this point, but even if they are, I hope the organizers will consider the possibility for next year. Thanks, --jh-- From prabhu at aero.iitb.ac.in Sat Jan 5 05:04:43 2008 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sat, 05 Jan 2008 15:34:43 +0530 Subject: [SciPy-user] Mayavi2 installation woes (OS X 10.4.11, Python 2.5) In-Reply-To: References: <4769CCFD.4080301@gmail.com> Message-ID: <477F563B.1060708@aero.iitb.ac.in> jelle feringa wrote: > I'm facing the same issue as William here. > > The error seems pretty precise: > swig_sources() takes exactly 3 arguments (2 given) Didn't Robert's advice help? cheers, prabhu From Karl.Landsteiner at gmx.de Sat Jan 5 06:31:58 2008 From: Karl.Landsteiner at gmx.de (Karl Landsteiner) Date: Sat, 05 Jan 2008 12:31:58 +0100 Subject: [SciPy-user] scipy build fails on OS X 10.5 Message-ID: <20080105113158.283980@gmx.net> Hi, I am trying to build and install scipy on my new MacBook running OS X 10.5 Leopard. Numpy 1.0.4 build and install worked without problem but scipy 0.6.0 build fails. here are my gcc and gfortran version numbers: gcc --version i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. 
build 5465) gfortran --version GNU Fortran (GCC) 4.2.1 the relevant errors in the output of "python setup.py build" seem to be

Could not locate executable g77
Could not locate executable f77
Could not locate executable f95
customize GnuFCompiler
customize Gnu95FCompiler
Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.1\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n'
customize G95FCompiler
customize G95FCompiler
customize G95FCompiler using build_clib
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
customize NAGFCompiler
customize AbsoftFCompiler
customize IbmFCompiler
customize GnuFCompiler
customize Gnu95FCompiler
Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.1\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n'
customize G95FCompiler
...

Thanks for all your help in advance! -Karl From fredmfp at gmail.com Sat Jan 5 06:33:32 2008 From: fredmfp at gmail.com (fred) Date: Sat, 05 Jan 2008 12:33:32 +0100 Subject: [SciPy-user] Mayavi2 installation woes (OS X 10.4.11, Python 2.5) In-Reply-To: References: <4769CCFD.4080301@gmail.com> Message-ID: <477F6B0C.8000809@gmail.com> jelle feringa a écrit : > swig is known to be quite picky on version compatibility issues. > might this be issue? > i'm running swig 1.3.20 IIRC, old versions are "buggy". Try with 1.3.33, for instance, it should work. My 2 cts. Cheers, -- http://scipy.org/FredericPetit From emanuele at relativita.com Sat Jan 5 10:36:17 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 05 Jan 2008 16:36:17 +0100 Subject: [SciPy-user] differences between numpy.linalg.cholesky and scipy.linalg.cholesky? Message-ID: <477FA3F1.40401@relativita.com> I'm using the Cholesky decomposition a lot, trying both numpy.linalg.cholesky and scipy.linalg.cholesky on Hermitian positive definite matrices. Sometimes I get """<class 'numpy.linalg.linalg.LinAlgError'>: Matrix is not positive definite - Cholesky decomposition cannot be computed""" when using numpy's cholesky. No problems with scipy's cholesky. Why? Thanks in advance, Emanuele From jelleferinga at gmail.com Sat Jan 5 11:33:37 2008 From: jelleferinga at gmail.com (jelle) Date: Sat, 5 Jan 2008 16:33:37 +0000 (UTC) Subject: [SciPy-user] Mayavi2 installation woes (OS X 10.4.11, Python 2.5) References: <4769CCFD.4080301@gmail.com> <477F6B0C.8000809@gmail.com> Message-ID: > Try with 1.3.33, for instance, it should work. thanks a lot for the hint Fred! much appreciated From ennesnospam1 at aeroakustik.de Sun Jan 6 11:17:16 2008 From: ennesnospam1 at aeroakustik.de (ennesnospam1 at aeroakustik.de) Date: Sun, 6 Jan 2008 17:17:16 +0100 (CET) Subject: [SciPy-user] lpmn - python crash with AMD processor Message-ID: <20080106161716.52DCECC0E4A@webgo24-server3.webgo24-server3.de> Hi all, I have got a strange problem using lpmn from scipy.special under Win XP (32bit).
Everything works fine as long as I am on a machine with Intel (pentium, centrino) processor. On an AMD based machine the function lpmn crashes if either the first or second argument is <>0. >>> >>> from scipy import special >>> >>> special.lpmn(0,0,0.1) (array([[ 1.]]), array([[ 0.]])) >>> >>> special.lpmn(0,1,0.1) (PYTHON CRASH) The actual crash seems to happen inside the compiled module specfun.pyd. Does anyone have an idea how to deal with this? Many thanks in advance, Ennes From nwagner at iam.uni-stuttgart.de Sun Jan 6 11:52:31 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 06 Jan 2008 17:52:31 +0100 Subject: [SciPy-user] lpmn - python crash with AMD processor In-Reply-To: <20080106161716.52DCECC0E4A@webgo24-server3.webgo24-server3.de> References: <20080106161716.52DCECC0E4A@webgo24-server3.webgo24-server3.de> Message-ID: On Sun, 6 Jan 2008 17:17:16 +0100 (CET) ennesnospam1 at aeroakustik.de wrote: > Hi all, > I have got a strange problem using lpmn from >scipy.special under Win XP > (32bit). > Everything works fine as long as I am on a machine with >Intel (pentium, > centrino) processor. On an AMD based machine the >function lpmn crashes > if either the first or second argument is <>0. > >>>> >>> from scipy import special >>>> >>> special.lpmn(0,0,0.1) > (array([[ 1.]]), array([[ 0.]])) >>>> >>> special.lpmn(0,1,0.1) > (PYTHON CRASH) > > The actual crash seems to happen inside the compiled >module specfun.pyd. > > Does anyone have an idea how to deal with this? > > Many thanks in advance, > Ennes > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user I cannot reproduce your problem here (AMD processor). >>> from scipy import special >>> special.lpmn(0,1,0.1) (array([[ 1. , 0.1]]), array([[ 0., 1.]])) >>> special.lpmn(0,0,0.1) (array([[ 1.]]), array([[ 0.]])) >>> import scipy >>> scipy.__version__ '0.7.0.dev3780' >>> import os, sys >>> >>> >>> os.system("uname -a") Linux noname 2.6.18.8-0.7-default #1 SMP Tue Oct 2 17:21:08 UTC 2007 x86_64 x86_64 x86_64 GNU/Linux Nils From ennesnospam1 at aeroakustik.de Sun Jan 6 12:15:45 2008 From: ennesnospam1 at aeroakustik.de (ennesnospam1 at aeroakustik.de) Date: Sun, 6 Jan 2008 18:15:45 +0100 (CET) Subject: [SciPy-user] =?iso-8859-1?q?lpmn_-_python_crash_with_AMD_processo?= =?iso-8859-1?q?r?= Message-ID: <20080106171545.384E5CC0E28@webgo24-server3.webgo24-server3.de> > On Sun, 6 Jan 2008 17:17:16 +0100 (CET) > ennesnospam1 at aeroakustik.de wrote: > > Hi all, > > I have got a strange problem using lpmn from > >scipy.special under Win XP > > (32bit). > > Everything works fine as long as I am on a machine with > >Intel (pentium, > > centrino) processor. On an AMD based machine the > >function lpmn crashes > > if either the first or second argument is <>0. > > > >>>> >>> from scipy import special > >>>> >>> special.lpmn(0,0,0.1) > > (array([[ 1.]]), array([[ 0.]])) > >>>> >>> special.lpmn(0,1,0.1) > > (PYTHON CRASH) > > > > The actual crash seems to happen inside the compiled > >module specfun.pyd. > > > > Does anyone have an idea how to deal with this? > > > > Many thanks in advance, > > Ennes > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > I cannot reproduce your problem here (AMD processor). > > >>> from scipy import special > >>> special.lpmn(0,1,0.1) > (array([[ 1. 
, 0.1]]), array([[ 0., 1.]])) > >>> special.lpmn(0,0,0.1) > (array([[ 1.]]), array([[ 0.]])) > >>> import scipy > >>> scipy.__version__ > '0.7.0.dev3780' > >>> import os, sys > >>> > >>> > >>> os.system("uname -a") > Linux noname 2.6.18.8-0.7-default #1 SMP Tue Oct 2 > 17:21:08 UTC 2007 x86_64 x86_64 x86_64 GNU/Linux > > Nils I tried on eight different machines with different scipy versions <= 0.6.0, all under WinXP SP 2 (32 bit). Two had Intel processors -> everything fine. Six had AMDs (Athlon, Duron) -> segmentation fault. From your post it seems that it works fine under Linux. Unfortunately Linux is not an option for me (at least in this case). Ennes From matthieu.brucher at gmail.com Sun Jan 6 12:39:54 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 6 Jan 2008 18:39:54 +0100 Subject: [SciPy-user] lpmn - python crash with AMD processor In-Reply-To: <20080106171545.384E5CC0E28@webgo24-server3.webgo24-server3.de> References: <20080106171545.384E5CC0E28@webgo24-server3.webgo24-server3.de> Message-ID: > I tried on eight different machines with different scipy versions <= > 0.6.0, all under WinXP SP 2 (32 bit). Two had Intel processors -> > everything fine. Six had AMDs (Athlon, Duron) -> segmentation fault. > From your post it seems that it works fine under Linux. Unfortunately > Linux is not an option for me (at least in this case). What kind of processor do you have, and which binary did you take? For Athlon XP processors, you cannot take the usual binary because it uses SSE2 instructions, which are not supported by Athlon XP processors. You should take the binary for Pentium II (which does not support the SSE2 instruction set). Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From timmichelsen at gmx-topmail.de Sun Jan 6 19:03:53 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Mon, 07 Jan 2008 01:03:53 +0100 Subject: [SciPy-user] assignment of hours of day in time series In-Reply-To: References: Message-ID: <47816C69.3090007@gmx-topmail.de> Hello, my first post was sent nearly 2 weeks ago and may have gotten buried between the years. I therefore Cc it to the time series developers because they may know something on this. > I'd start with the Python built-in datetime module and see if that helps. As I see from the example (http://docs.python.org/lib/node85.html) the datetime objects are a basis for date and time formatting and calculations. I think they are being used internally by the module. But I am talking of a simple solution to adjust the input (read: the start time of my time series) and the reporting. For easier understanding I have put some random data below. The first table shows my input data, which I also described in my previous email. I think many data sets from sensors will be in a format like mine. The last table is what I would like to get in my report. How would you as an experienced scipy/time series user implement the code to get my desired output? I read the documentation with pydoc and didn't find any parameter to set 24:00 instead of 0:00 hrs.
Thanks for your help, Timmie

Example tables:

table 1: original data
date hour_of_day value
1-Feb-2004 1 247
1-Feb-2004 2 889
1-Feb-2004 3 914
1-Feb-2004 4 292
1-Feb-2004 5 183
1-Feb-2004 6 251
1-Feb-2004 7 953
1-Feb-2004 8 156
1-Feb-2004 9 991
1-Feb-2004 10 557
1-Feb-2004 11 581
1-Feb-2004 12 354
1-Feb-2004 13 485
1-Feb-2004 14 655
1-Feb-2004 15 862
1-Feb-2004 16 399
1-Feb-2004 17 598
1-Feb-2004 18 744
1-Feb-2004 19 445
1-Feb-2004 20 374
1-Feb-2004 21 168
1-Feb-2004 22 995
1-Feb-2004 23 943
1-Feb-2004 24 326

table 2: report with start hour set to hour 1
date hour_of_day value
1-Feb-2004 1:00 247
1-Feb-2004 2:00 889
1-Feb-2004 3:00 914
1-Feb-2004 4:00 292
1-Feb-2004 5:00 183
1-Feb-2004 6:00 251
1-Feb-2004 7:00 953
1-Feb-2004 8:00 156
1-Feb-2004 9:00 991
1-Feb-2004 10:00 557
1-Feb-2004 11:00 581
1-Feb-2004 12:00 354
1-Feb-2004 13:00 485
1-Feb-2004 14:00 655
1-Feb-2004 15:00 862
1-Feb-2004 16:00 399
1-Feb-2004 17:00 598
1-Feb-2004 18:00 744
1-Feb-2004 19:00 445
1-Feb-2004 20:00 374
1-Feb-2004 21:00 168
1-Feb-2004 22:00 995
1-Feb-2004 23:00 943
2-Feb-2004 0:00 326

table 3: report with start hour set to hour 0
date hour_of_day value
1-Feb-2004 0:00 247
1-Feb-2004 1:00 889
1-Feb-2004 2:00 914
1-Feb-2004 3:00 292
1-Feb-2004 4:00 183
1-Feb-2004 5:00 251
1-Feb-2004 6:00 953
1-Feb-2004 7:00 156
1-Feb-2004 8:00 991
1-Feb-2004 9:00 557
1-Feb-2004 10:00 581
1-Feb-2004 11:00 354
1-Feb-2004 12:00 485
1-Feb-2004 13:00 655
1-Feb-2004 14:00 862
1-Feb-2004 15:00 399
1-Feb-2004 16:00 598
1-Feb-2004 17:00 744
1-Feb-2004 18:00 445
1-Feb-2004 19:00 374
1-Feb-2004 20:00 168
1-Feb-2004 21:00 995
1-Feb-2004 22:00 943
1-Feb-2004 23:00 326

table 4: desired report output
date hour_of_day value
1-Feb-2004 1:00 247
1-Feb-2004 2:00 889
1-Feb-2004 3:00 914
1-Feb-2004 4:00 292
1-Feb-2004 5:00 183
1-Feb-2004 6:00 251
1-Feb-2004 7:00 953
1-Feb-2004 8:00 156
1-Feb-2004 9:00 991
1-Feb-2004 10:00 557
1-Feb-2004 11:00 581
1-Feb-2004 12:00 354
1-Feb-2004 13:00 485
1-Feb-2004 14:00 655
1-Feb-2004 15:00 862
1-Feb-2004 16:00 399
1-Feb-2004 17:00 598
1-Feb-2004 18:00 744
1-Feb-2004 19:00 445
1-Feb-2004 20:00 374
1-Feb-2004 21:00 168
1-Feb-2004 22:00 995
1-Feb-2004 23:00 943
1-Feb-2004 24:00 326

From timmichelsen at gmx-topmail.de Sun Jan 6 19:06:12 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Mon, 07 Jan 2008 01:06:12 +0100 Subject: [SciPy-user] flagging no data in timeseries In-Reply-To: <200712201137.35658.pgmdevlist@gmail.com> References: <200712201137.35658.pgmdevlist@gmail.com> Message-ID: <47816CF4.5010309@gmx-topmail.de> > The following solutions should be equivalent: > > * use masked_where from maskedarray >>>> myvalues_ts_hourly = masked_where(myvalues_ts_hourly , -999) > > * Use indexing >>>> myvalues_ts_hourly[myvalues_ts_hourly==-999] = M.masked I tried both of your solutions and they ran without problems. But there was nothing masked. All I got as output was: > -- What does that mean? Any further help is greatly appreciated! Kind regards, Timmie From timmichelsen at gmx-topmail.de Sun Jan 6 19:38:59 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Mon, 07 Jan 2008 01:38:59 +0100 Subject: [SciPy-user] roadmap/plans for timeseries package Message-ID: Hello, after spending some time testing out the timeseries package for the use of time series analysis and visualisation I can really say that it already has a nice foundation for this kind of work. But there are things that I can see improved from a beginner's point of view.
I would like to name a few and then ask you (Pierre GM and Matt Knox) what further plans you have. Some of these had already been discussed briefly [1]. I am also interested because Mr. Millman mentioned something that timeseries will be taken out of the sandbox to a place called scikit (what's that BTW?): [2] * an example data set for at least one year at a high temporal resolution: 15 min or at least 1 h. Having such a common data set, one could set up tutorial examples and debug or ask questions more easily because all will have the same (non-confidential) data on the disk. * being able to access the datetime information for calculations: as I understand from my observation, the datetime information is not really a part of the array and therefore not really available as a reference in calculations. An example: one has to get the rainfall intensity during the early morning hours. For such a filter the information on the corresponding hours is necessary. Is this already possible? * The maskedarray module is not really well documented when it comes to information beyond what's in the docstrings. There are no examples of how to mask data in an array based upon different criteria. I find it kinda hard to get the data in a datetime object in a clean manner. Well, that's all so far. I haven't gotten into plotting time series more than what's in the wiki. I guess it's mainly knowing some matplotlib. Can I already submit bug/feature requests for time series? Where? I am really interested in your plans because I will use timeseries a lot once I have my basic needs [3] and troubles [4] sorted out. Kind regards, Timmie [1] Re: time series analysis - http://thread.gmane.org/gmane.comp.python.scientific.user/13949/focus=13962 [2] The end of the SciPy sandbox - http://jarrodmillman.blogspot.com/2007/12/end-of-scipy-sandbox.html [3] Re: assignment of hours of day in time series - http://thread.gmane.org/gmane.comp.python.scientific.user/14504/focus=14525 [4] Re: flagging no data in timeseries - http://thread.gmane.org/gmane.comp.python.scientific.user/14455 From dominique.orban at gmail.com Sun Jan 6 21:49:46 2008 From: dominique.orban at gmail.com (Dominique Orban) Date: Sun, 6 Jan 2008 21:49:46 -0500 Subject: [SciPy-user] Would anyone connect fortran constrained linear least squares solver to Python? In-Reply-To: <47728F81.10906@scipy.org> References: <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org> <47728F81.10906@scipy.org> Message-ID: <8793ae6e0801061849v4909de45ma00aa95aac7c5a3d@mail.gmail.com> On Dec 26, 2007 12:29 PM, dmitrey wrote: > Hi all, > > I had noticed (from traffic statistics): lots of people are interested > in linear least squares problems (LLSP). However, scipy has only > unconstrained LAPACK dGELSS/sGELSS. > > Could anyone provide connection of the fortran-written solver to Python > (or connect it to scipy)? > > http://netlib3.cs.utk.edu/toms/587 > (that one can handle linear eq and ineq constraints) > > Then I would gladly provide connection of the one to scikits.openopt. Hi Dmitrey, Please note that NLPy now features a pure Python version of Michael Saunders' LSQR for unconstrained linear least-squares: http://nlpy.sf.net Regarding the Netlib Fortran code, can't we use f2py?
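Something along these lines should be a starting point (a rough sketch; the module name here is arbitrary, and the exact call signatures depend on what f2py generates from the downloaded TOMS source):

f2py -c -m lsei 587.f

>>> import lsei
>>> print lsei.__doc__    # inspect the generated wrapper signatures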
Cheers,
Dominique

From mattknox_ca at hotmail.com Sun Jan 6 22:36:42 2008
From: mattknox_ca at hotmail.com (matt)
Date: Mon, 7 Jan 2008 03:36:42 +0000 (UTC)
Subject: [SciPy-user] assignment of hours of day in time series
References: <47816C69.3090007@gmx-topmail.de>
Message-ID:

> table 4: desired report output
> date         hour_of_day  value
> 1-Feb-2004   1:00         247
> 1-Feb-2004   2:00         889
> 1-Feb-2004   3:00         914
> 1-Feb-2004   4:00         292
> 1-Feb-2004   5:00         183
> 1-Feb-2004   6:00         251
> 1-Feb-2004   7:00         953
> 1-Feb-2004   8:00         156
> 1-Feb-2004   9:00         991
> 1-Feb-2004   10:00        557
> 1-Feb-2004   11:00        581
> 1-Feb-2004   12:00        354
> 1-Feb-2004   13:00        485
> 1-Feb-2004   14:00        655
> 1-Feb-2004   15:00        862
> 1-Feb-2004   16:00        399
> 1-Feb-2004   17:00        598
> 1-Feb-2004   18:00        744
> 1-Feb-2004   19:00        445
> 1-Feb-2004   20:00        374
> 1-Feb-2004   21:00        168
> 1-Feb-2004   22:00        995
> 1-Feb-2004   23:00        943
> 1-Feb-2004   24:00        326

Hi Tim, sorry for not getting back to you earlier. You are correct in
that I did miss the posting earlier. Since the time "24:00" doesn't
actually exist (as far as I am aware, anyway), you will have to rely on
somewhat of a hack to get your desired output. Try this:

>>> import timeseries as ts
>>> series = ts.time_series(range(400, 430), start_date=ts.now('hourly'))
>>> hours = ts.time_series(series.hour + 1, dates=series.dates)
>>> hour_fmtfunc = lambda x : '%i:00' % x
>>> ts.Report(hours, series, datefmt='%d-%b-%Y', delim=' ', fmtfunc=[hour_fmtfunc, None])()
06-Jan-2008 23:00 400
06-Jan-2008 24:00 401
07-Jan-2008 1:00 402
07-Jan-2008 2:00 403
07-Jan-2008 3:00 404
07-Jan-2008 4:00 405
07-Jan-2008 5:00 406
07-Jan-2008 6:00 407
07-Jan-2008 7:00 408
07-Jan-2008 8:00 409
07-Jan-2008 9:00 410
07-Jan-2008 10:00 411
07-Jan-2008 11:00 412
07-Jan-2008 12:00 413
07-Jan-2008 13:00 414
07-Jan-2008 14:00 415
07-Jan-2008 15:00 416
07-Jan-2008 16:00 417
07-Jan-2008 17:00 418
07-Jan-2008 18:00 419
07-Jan-2008 19:00 420
07-Jan-2008 20:00 421
07-Jan-2008 21:00 422
07-Jan-2008 22:00 423
07-Jan-2008 23:00 424
07-Jan-2008 24:00 425
08-Jan-2008 1:00 426
08-Jan-2008 2:00 427
08-Jan-2008 3:00 428
08-Jan-2008 4:00 429
=================================

Basically, add one to the "hour" property of the series to get your
desired output.

- Matt

From mattknox_ca at hotmail.com Sun Jan 6 23:07:06 2008
From: mattknox_ca at hotmail.com (matt)
Date: Mon, 7 Jan 2008 04:07:06 +0000 (UTC)
Subject: [SciPy-user] roadmap/plans for timeseries package
References:
Message-ID:

> I am also interested because Mr. Millman mentioned that timeseries
> will be taken out of the sandbox to a place called a scikit (what's
> that, BTW?)

For the definition of what a scikit is, I suggest reading some of the
recent threads on this topic. I'll refrain from providing you with my
own definition as it is likely to be wrong :)

Definitions aside, yes, the timeseries module will probably be moving
to a scikit at some point in the not too distant future. I'd kind of
like the maskedarray module to finish moving into numpy before we make
the timeseries module a scikit, but we may get kicked out of the
sandbox before then, so we'll see. I haven't really spoken to Pierre
about the details of this yet, but I suspect we'll probably start doing
actual "releases" once we move to a scikit, and providing windows
binaries.

> * an example data set covering at least one year at a high temporal
> resolution: 15 min or at least 1 h. With such a common data set one
> could set up tutorial examples and debug or ask questions more
> easily, because everyone would have the same (non-confidential) data
> on disk.
There was talk about sample data being included in scipy a while ago;
not sure if this ever got anywhere. But I agree that it is worthwhile,
especially for a timeseries module.

> * being able to access the datetime information for calculations:
> as I understand from my observation, the datetime information is not
> really a part of the array and therefore not really available as a
> reference in calculations. An example: one has to get the rainfall
> intensity during the early morning hours. For such a filter, the
> information on the corresponding hours is necessary. Is this already
> possible?

If I understand you correctly, then yes, this is already possible.
Observe...

>>> import timeseries as ts
>>> data = ts.time_series(range(100), start_date=ts.now('hourly'))
>>> hours = data.hour
>>> filtered_data = data[(hours < 7) & (hours > 3)]
>>> filtered_data
timeseries([ 6  7  8 30 31 32 54 55 56 78 79 80],
   dates = [07-Jan-2008 04:00 07-Jan-2008 05:00 07-Jan-2008 06:00
            08-Jan-2008 04:00 08-Jan-2008 05:00 08-Jan-2008 06:00
            09-Jan-2008 04:00 09-Jan-2008 05:00 09-Jan-2008 06:00
            10-Jan-2008 04:00 10-Jan-2008 05:00 10-Jan-2008 06:00],
   freq  = H)

> * The maskedarray functionality is not really well documented when it
> comes to the information behind what's in the docstrings. There are
> no examples of how to mask data in an array based upon different
> criteria. I find it kinda hard to get the data into a datetime object
> in a clean manner.

Documentation will get better eventually. Realistically, this is
probably still the "early adopter" stage. Not sure what you mean by
"find it kinda hard to get the data into a datetime object in a clean
manner". Can you elaborate on that more?

> Well, that's all so far. I haven't gotten into plotting time series
> more than what's in the wiki. I guess it's mainly knowing some
> matplotlib.

Yeah, I would recommend spending some time learning matplotlib first
before doing timeseries plotting. Word of caution... the timeseries
plotting stuff does not currently support frequencies higher than
daily (e.g. hourly, minutely, etc...). Support for these frequencies
could be added without too much trouble, but we just haven't got
around to it yet.

> Can I already submit bug/feature requests for time series?
> Where?

Once it becomes a scikit there may be a more systematic way of doing
this, but for now I would recommend just emailing Pierre or myself.

- Matt

From pgmdevlist at gmail.com Mon Jan 7 00:45:48 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 7 Jan 2008 00:45:48 -0500
Subject: Re: [SciPy-user] roadmap/plans for timeseries package
In-Reply-To:
References:
Message-ID: <200801070045.48572.pgmdevlist@gmail.com>

> Definitions aside, yes, the timeseries module will probably be moving
> to a scikit at some point in the not too distant future.

I concur.

> I haven't really spoken to Pierre about the details of this yet, but
> I suspect we'll probably start doing actual "releases" once we move
> to a scikit, and providing windows binaries.

Yep. Cleaning up our sandbox on a monthly or every-other-month basis is
quite feasible.

> > * an example data set covering at least one year at a high temporal
> > resolution: 15 min or at least 1 h. With such a common data set one
> > could set up tutorial examples and debug or ask questions more
> > easily, because everyone would have the same (non-confidential)
> > data on disk.

For hours, you have the 'hourly' frequency. For 15 min, you have the
'minutely' frequency, from which you can select every 15th point.
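A minimal sketch (assuming, as in the indexing example above, that
slicing a time_series keeps its dates aligned with the data):

>>> import timeseries as ts
>>> minutely = ts.time_series(range(60), start_date=ts.now('minutely'))
>>> quarter_hourly = minutely[::15]   # every 15th point, i.e. one value per 15 minutes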
> There was talk about sample data being included in scipy a while
> ago; not sure if this ever got anywhere. But I agree that it is
> worthwhile, especially for a timeseries module.

At the same time, there's a nice page on the scipy wiki. Maybe we could
use that as a basis.

> > * being able to access the datetime information for calculations:
> > as I understand from my observation, the datetime information is
> > not really a part of the array and therefore not really available
> > as a reference in calculations.

Mmh, not sure what you mean here. The date/time info is as much part of
the TimeSeries object as the mask in a masked array. Matt's example
speaks for itself. If you think about it, that's the only real interest
of the current version of timeseries: we don't have any real time
series analysis functions yet (understand: finding the parameters of an
ARIMA model, for example), but we're doing pretty well on indexing data
with date/time information, and retrieving the data.

> > * The maskedarray functionality is not really well documented when
> > it comes to the information behind what's in the docstrings. There
> > are no examples of how to mask data in an array based upon
> > different criteria. I find it kinda hard to get the data into a
> > datetime object in a clean manner.

I'm afraid I'll side with Matt: could you be more explicit? I have
difficulties picturing what you mean.

> > Can I already submit bug/feature requests for time series?
> > Where?
>
> Once it becomes a scikit there may be a more systematic way of doing
> this, but for now I would recommend just emailing Pierre or myself.

Same advice. It's gonna be far faster. You can find our contact emails
in every file (not ultra wise, yes, but eh, that's consumer service ;)

From fperez.net at gmail.com Mon Jan 7 02:18:11 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 7 Jan 2008 00:18:11 -0700
Subject: [SciPy-user] Sage/Scipy Days 8 at Enthought: Feb 29/March 4 2008
Message-ID:

Hi all,

below is the full text of the announcement, which has also been posted
here:

http://wiki.sagemath.org/days8

Many thanks to Enthought for the generous support they've offered! We
really look forward to this meeting being a great opportunity for
collaboration between the Scipy and Sage teams.

Cheers,

Travis Oliphant
William Stein
Fernando Perez

================================
 Sage/Scipy Days 8 at Enthought
================================

---------------------------------------------------------
 Connecting Pure Mathematics With Scientific Computation
---------------------------------------------------------

.. Contents::
..
    1 Introduction
    2 Location
    3 Costs and Funding
    4 Contacts and further information
    5 Preliminary agenda
      5.1 Friday 29: Talks
      5.2 Saturday 1: More talks/code planning/coding
      5.3 Sunday 2: Coding
      5.4 Monday 3: Coding
      5.5 Tuesday 4: Wrapup
..

Introduction
============

The Sage_ and Scipy_ teams and `Enthought Inc.`_ are pleased to
announce the first collaborative meeting for Sage/Scipy joint
development, to be held from February 29 until March 4, 2008 at the
Enthought headquarters in Austin, Texas.

The purpose of this meeting is to gather developers for these projects
in a friendly atmosphere for a few days of technical talks and active
development work. It should be clear to those interested in attending
that this is *not* an academic conference, but instead an opportunity
for the two teams to find common points for collaboration, joint work,
better integration between the projects and future directions.
The focus of the workshop will be to actually *implement* such ideas,
not just to plan for them.

**Sage** is a Python-based system which aims at providing an open
source, free alternative to existing proprietary mathematical software
and does so by integrating multiple open source projects, as well as
providing its own native functionality in many areas. It includes by
default the NumPy and SciPy packages.

**NumPy and SciPy** are Python libraries whose focus is
high-performance numerical computing, and they are widely accepted as
the foundation of most Python-based scientific computing projects.

**Enthought** is a scientific computing company that produces
Python-based tools for many application-specific domains. Enthought has
a strong commitment to open source development: it provides support and
hosting for Numpy, Scipy, and many other Python scientific projects,
and many of its tools_ are freely available.

The theme of the workshop is finding ways to best combine our strengths
to create something that is significantly better than anything ever
done so far.

.. _Sage: http://sagemath.org
.. _Scipy: http://scipy.org
.. _`Enthought Inc.`: http://enthought.com
.. _tools: http://code.enthought.com

Location
========

The workshop will be held at the headquarters of `Enthought Inc.`_::

    Suite 2100
    515 Congress Ave.
    Austin, TX 78701
    (512) 536-1057, voice
    (512) 536-1059, fax

.. _`Enthought Inc.`: http://enthought.com

Costs and Funding
=================

We can accommodate a total of 30 attendees at Enthought's headquarters
for the meeting. There is a $100 registration fee, which will be used
to cover coffee, snacks and lunches for all the days of the meeting,
plus one group dinner outing. Attendees can use the wiki_ to coordinate
room/car rental sharing if they so desire.

Thanks to Enthought's generous offer of support, we'll be able to cover
the above costs for 15 attendees, in addition to offering them housing
and transportation. Please note that housing will be provided at
Enthought's personal residences, so remember to bring your clean
pajamas.

We are currently looking into the possibility of additional funding to
cover the registration fee for all attendees, and will update the wiki
accordingly if that becomes possible.

If you plan on coming, please email Fernando.Perez at colorado.edu to
let us know of your intent so we can have a better idea of the total
numbers. Please indicate if you could only come under the condition
that you can be hosted. We will try to offer hosting to as many of the
Sage and Scipy developers as possible, but if you can fund your own
expenses, this may open a slot for someone with limited funds. If the
total attendance is below 15, we will offer hosting to everyone.

We will close registration for those requesting hosting by Sunday,
February 3, 2008. If we actually fill up all 30 available slots we will
announce it; otherwise you are free to attend by letting us know
anytime before the meeting, though past Feb. 1 you will be required to
pay the registration fee of $100.

.. _wiki: http://wiki.sagemath.org/days8

Contacts and further information
================================

For further information, you can either contact one of the following
people (in parentheses we note the topic most likely to be relevant to
them):

- William Stein (Sage): wstein at gmail.org

- Fernando Perez (Scipy): Fernando.Perez at colorado.edu

- Travis Oliphant (Enthought): oliphant at enthought.com

or you can go to our wiki_ for up-to-date details.
Preliminary agenda
==================

Friday 29: Talks
----------------

This is a rough cut of suggested topics, along with a few notes on
possible details that might be of interest. The actual material in the
talks will be up to the presenters, of course. Some of these topics
might just become projects to work on rather than actual talks, if we
don't have a speaker available but have interested parties who wish to
focus on the problem.

Speakers are asked to include a slide on where they see any chances for
better collaboration between the various projects (to the best of their
knowledge). There will be a note-taker during the day who will try to
keep tabs on this information and will summarize it as starting
material for the joint work panel discussion to be held on Saturday
(FPerez volunteers for this task if needed).

- Numpy internal architecture and type system.

- Sage internal type system, with emphasis on its number type system.

- A clarification of where the 'sage language' goes beyond python.
  Things like ``A\b`` are valid in the CLI but not the notebook. Scipy
  is pure python, so it would help the scipy team better understand
  the boundaries between the two.

- Special methods used by Sage (foo._magical_sage_method_)? If some of
  these make sense, we might want to agree on common protocols for
  numpy/scipy/sage objects to honor.

- Sage usage of numpy, sage.matrix vs numpy.arrays. Smoother
  integration of numpy arrays/sage matrices and vectors.

- Extended precision LAPACK. Integration in numpy/sage. The extended
  precision LAPACK work was done by Y. Hida and J. Demmel at UC
  Berkeley.

- Distributed/Parallel computing: DSage, ipython, Brian Granger's work
  on Global arrays for NASA...

- Scikits: these are 'toolkits' that use numpy/scipy and can contain
  GPL code (details of how these will work are being firmed up in the
  scipy lists, and will be settled by the workshop). Perhaps some of
  SAGE's library wrappers (like GMP, MPFR or GSL) could become
  scikits?

- Cython: status (inclusion in py2.6?), overview, opportunities for
  better numpy integration and usage.

- Enthought technologies: Traits, TVTK, Mayavi, Chaco, Envisage.

- User interface collaboration: 'sage-lite'/pylab/ipython code sharing
  possibilities?

Saturday 1: More talks/code planning/coding
-------------------------------------------

9-11 am: Any remaining talks that didn't fit on Friday. Only if needed.

11-12: panel for specific coding projects and ideas, spill over into
lunch time.

12-1: lunch.

Rest of day: start coding! Organize in teams according to the plans
made earlier and code away...

Sunday 2: Coding
----------------

Work on projects decided above.

5-6pm: brief (5-10 minutes) status updates from coding teams. Problems
encountered, progress, suggestions for adjustment.

Monday 3: Coding
----------------

Same as Sunday.

Tuesday 4: Wrapup
-----------------

9-11 am: Wrapup sessions with summary from coding projects.

11-12 am: Panel discussion on future joint work options.

Afternoon: anyone left around can continue to code!

From ennesnospam1 at aeroakustik.de Mon Jan 7 08:26:15 2008
From: ennesnospam1 at aeroakustik.de (ennesnospam1 at aeroakustik.de)
Date: Mon, 7 Jan 2008 14:26:15 +0100 (CET)
Subject: [SciPy-user] lpmn - python crash with AMD processor
Message-ID: <20080107132615.58BC6CC0EA5@webgo24-server3.webgo24-server3.de>

>> I tried on eight different machines with different scipy versions <=
>> 0.6.0, all under WinXP SP 2 (32 bit). Two had Intel processors ->
>> everything fine.
>> Six had AMDs (Athlon, Duron) -> segmentation fault.
>> From your post it seems that it works fine under Linux. Unfortunately
>> Linux is no option for me (at least in this case).

> What kind of processor do you have, and which binary did you take?
> For Athlon XP processors, you cannot take the usual binary because it
> uses SSE2 instructions, which are not supported by Athlon XP
> processors. You should take the binary for Pentium II (which does not
> support the SSE2 instruction set).

On my present machine it is an AMD Sempron, but I had the same problem
with other AMDs like Athlon XP and Duron. It is not very clear to me
_what_ binary you mean. The specfun.pyd? I have no means of determining
what version it has (I found version "0.0" in the crash report, but I
guess that would be of no help). Python itself and any function in
scipy/numpy/ETS etc. works fine on all processors.
My scipy install is from the scipy-0.6.0.0004_s-py2.5-win32.egg (which
is not marked as exclusively built for Intel), but I found the same
behaviour in earlier versions.

Where do I find this PII version you mentioned?

Ennes

From corrada at cs.umass.edu Mon Jan 7 08:43:19 2008
From: corrada at cs.umass.edu (Andres Corrada-Emmanuel)
Date: Mon, 07 Jan 2008 08:43:19 -0500
Subject: Re: [SciPy-user] lpmn - python crash with AMD processor
In-Reply-To: <20080107132615.58BC6CC0EA5@webgo24-server3.webgo24-server3.de>
References: <20080107132615.58BC6CC0EA5@webgo24-server3.webgo24-server3.de>
Message-ID: <47822C77.3080800@cs.umass.edu>

I think he was referring to the binary version of the ATLAS library
that you have installed on your system. Google for ATLAS pre-built
binaries to find the place to go.
--
Andres Corrada-Emmanuel
Research Fellow
Aerial Imaging and Remote Sensing Lab
Computer Science Department
University of Massachusetts at Amherst
Blog: www.corrada.com/blog

From lfriedri at imtek.de Mon Jan 7 10:18:56 2008
From: lfriedri at imtek.de (Lars Friedrich)
Date: Mon, 07 Jan 2008 16:18:56 +0100
Subject: [SciPy-user] scientific rounding
Message-ID: <478242E0.1010908@imtek.de>

Hello,

when printing numerical data, rounding is often desirable. I would like
to do something like 'relative rounding'. What I mean is the following:

round(1.3)    -> 1.0
round(1.3e-9) -> 0.0

What I would like to have is

newFunction(1.3)    -> 1.0
newFunction(1.3e-9) -> 1.0e-9

So I would like newFunction to do the rounding with respect to the
order of magnitude. Of course, it would be easy to write such a
function (dividing by the appropriate power of ten, rounding, and
multiplying again), but my question is whether such a function is
already available in numpy or scipy.

Thanks,
Lars

--
Dipl.-Ing. Lars Friedrich

Photonic Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax:   +49-761-203-7537
room:  01 088
email: lars.friedrich at imtek.de
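A minimal sketch of such a helper (the name newFunction comes from the
post above; plain numpy, assuming nonzero input so the logarithm is
defined):

>>> import numpy
>>> def newFunction(x):
...     scale = 10.0 ** numpy.floor(numpy.log10(abs(x)))
...     return scale * numpy.round(x / scale)
...
>>> newFunction(1.3)      # -> 1.0
>>> newFunction(1.3e-9)   # -> 1.0e-9 (up to floating-point representation)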
From ennesnospam1 at aeroakustik.de Mon Jan 7 10:22:40 2008
From: ennesnospam1 at aeroakustik.de (ennesnospam1 at aeroakustik.de)
Date: Mon, 7 Jan 2008 16:22:40 +0100 (CET)
Subject: Re: [SciPy-user] lpmn - python crash with AMD processor
Message-ID: <20080107152240.7CBE1CC0BDD@webgo24-server3.webgo24-server3.de>

> I think he was referring to the binary version of the ATLAS library
> that you have installed on your system. Google for ATLAS pre-built
> binaries to find the place to go.

I doubt this will do the trick - everything else, especially numpy,
works fine. And, from the source

http://svn.scipy.org/svn/scipy/trunk/scipy/special/specfun/specfun.f

I learn that there is no call to any foreign function from SUBROUTINE
LPMN (this has not much to do with linear algebra).

Thanks anyway,
Ennes

From lbolla at gmail.com Mon Jan 7 10:39:41 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Mon, 7 Jan 2008 16:39:41 +0100
Subject: Re: [SciPy-user] scientific rounding
In-Reply-To: <478242E0.1010908@imtek.de>
References: <478242E0.1010908@imtek.de>
Message-ID: <80c99e790801070739w5e69c3b0t54c2fd4c2c27ca39@mail.gmail.com>

You can give set_printoptions a try:

In [143]: numpy.set_printoptions(0)

In [144]: print numpy.array([1.3, 1.3e-9])
[ 1e+00   1e-09]

hth,
L.

From matthieu.brucher at gmail.com Mon Jan 7 10:51:09 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 7 Jan 2008 16:51:09 +0100
Subject: Re: [SciPy-user] lpmn - python crash with AMD processor
In-Reply-To: <20080107152240.7CBE1CC0BDD@webgo24-server3.webgo24-server3.de>
References: <20080107152240.7CBE1CC0BDD@webgo24-server3.webgo24-server3.de>
Message-ID:

> I doubt this will do the trick - everything else, especially numpy,
> works fine. And, from the source
> http://svn.scipy.org/svn/scipy/trunk/scipy/special/specfun/specfun.f
> I learn that there is no call to any foreign function from SUBROUTINE
> LPMN (this has not much to do with linear algebra).

If I'm wrong, then I'm wrong, but nothing you said proves that I'm
wrong. Just trying the second binaries on the scipy site won't take you
much time. What you describe seems to be a problem with SSE and SSE2.
It might not be, but if it is, you will know right away.

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
From ennesnospam1 at aeroakustik.de Mon Jan 7 11:33:56 2008
From: ennesnospam1 at aeroakustik.de (ennesnospam1 at aeroakustik.de)
Date: Mon, 7 Jan 2008 17:33:56 +0100 (CET)
Subject: Re: [SciPy-user] lpmn - python crash with AMD processor
Message-ID: <20080107163356.4538CCC0C2D@webgo24-server3.webgo24-server3.de>

> If I'm wrong, then I'm wrong, but nothing you said proves that I'm
> wrong. Just trying the second binaries on the scipy site won't take
> you much time. What you describe seems to be a problem with SSE and
> SSE2. It might not be, but if it is, you will know right away.

I am still not sure what second binaries you mean, but I replaced the
installed egg with the scipy-0.6.0.win32-py2.5.exe binary install and I
still get the same problem. I have installed scipy this way before with
earlier versions too and had the same error.

Btw, I appreciate your help, and my intention is far from proving you
wrong.

Greetings,

Ennes

From dmitrey.kroshko at scipy.org Mon Jan 7 12:17:31 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 07 Jan 2008 19:17:31 +0200
Subject: Re: [SciPy-user] Would anyone connect fortran constrained linear least squares solver to Python?
In-Reply-To: <8793ae6e0801061849v4909de45ma00aa95aac7c5a3d@mail.gmail.com>
References: <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org> <47728F81.10906@scipy.org> <8793ae6e0801061849v4909de45ma00aa95aac7c5a3d@mail.gmail.com>
Message-ID: <47825EAB.4080400@scipy.org>

Hi Dominique,

openopt development is a little bit frozen for now, since I am still
looking for financial support - I haven't had any since GSoC 2007
finished, but I try to remain active as long as possible (as my mentor
Alan G Isaac recommended), waiting for another GSoC (according to the
rules I must still be a student around April 11, so I intend to
graduate after that date in order to take part in GSoC 2008). So all I
do is eat low-hanging fruit: either connecting well-documented solvers
for which someone has already provided a Python bridge, or making other
minor changes (I have no possibility of starting big jobs).

Unfortunately, NLPy still lacks precise, convenient documentation.
Could you provide something like this one:
http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/llsp_1.py
Then I could provide an openopt binding for the solver.

As for the netlib routine mentioned, of course I would gladly connect
it if someone provided a Fortran-Python bridge for the routine. Taking
into account that I have no experience with f2py, SWIG, ctypes or other
similar Python <-> xxx bridges, I have neither the means nor the
willingness to spend my time and effort providing it myself. Moreover,
OO is 100% Python-written (to prevent possible cross-platform
installation problems), and Fortran routines should go to scipy and/or
other software already containing C/Fortran code.

BTW, an OpenOpt user from Princeton informed me of his willingness to
contribute something, and we settled on connecting IPOPT (to OO), so
2-3 months from now (I hope) OO users will have an IPOPT binding as
well.

Best regards,
Dmitrey
From dominique.orban at gmail.com Mon Jan 7 12:32:38 2008
From: dominique.orban at gmail.com (Dominique Orban)
Date: Mon, 7 Jan 2008 12:32:38 -0500
Subject: Re: [SciPy-user] Would anyone connect fortran constrained linear least squares solver to Python?
In-Reply-To: <47825EAB.4080400@scipy.org>
References: <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org> <47728F81.10906@scipy.org> <8793ae6e0801061849v4909de45ma00aa95aac7c5a3d@mail.gmail.com> <47825EAB.4080400@scipy.org>
Message-ID: <8793ae6e0801070932s27fe76cel617e8818facb3668@mail.gmail.com>

Hi Dmitrey,

On 1/7/08, dmitrey wrote:
> Unfortunately, NLPy still lacks precise, convenient documentation.
> Could you provide something like this one:
> http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/llsp_1.py
> Then I could provide an openopt binding for the solver.

In a sense, my situation is not unlike yours, since I am not often able
to spend much time writing documentation. However, a text document is
not the only medium for communicating documentation. NLPy has demo code
for almost all features in the Examples subfolder. Moreover, many
modules can be executed (to run a basic test). Regarding LSQR, the
default demo is

http://nlpy.svn.sourceforge.net/viewvc/nlpy/trunk/nlpy/Examples/demo_lsqr.py?revision=68&view=markup

and shows how to use the code. The function aprod() could be any
function that computes a matrix-vector product. The module lsqr.py
itself has a docstring explaining what problem it is trying to solve
and how. It also gives references to papers, for those who are that
motivated.

Cheers,
Dominique

From dominique.orban at gmail.com Mon Jan 7 12:48:44 2008
From: dominique.orban at gmail.com (Dominique Orban)
Date: Mon, 7 Jan 2008 12:48:44 -0500
Subject: Re: [SciPy-user] differences between numpy.linalg.cholesky and scipy.linalg.cholesky?
In-Reply-To: <477FA3F1.40401@relativita.com>
References: <477FA3F1.40401@relativita.com>
Message-ID: <8793ae6e0801070948o37f35a1eu4a57f3e2ff4d324d@mail.gmail.com>

On 1/5/08, Emanuele Olivetti wrote:
> I'm using Cholesky decomposition a lot, trying both
> numpy.linalg.cholesky and scipy.linalg.cholesky on hermitian positive
> definite matrices. Sometimes I get
> "Matrix is not positive definite - Cholesky decomposition cannot be
> computed"
> when using numpy's cholesky. No problems with scipy's cholesky. Why?

Could you provide a small example?

Dominique

From lev at columbia.edu Mon Jan 7 13:12:24 2008
From: lev at columbia.edu (Lev Givon)
Date: Mon, 7 Jan 2008 13:12:24 -0500
Subject: [SciPy-user] accessing contents of complex numpy array from pyrex
Message-ID: <20080107181224.GO26687@localhost.cc.columbia.edu>

Does anyone know how to access the elements of a 1-D numpy array
containing complex values from within a function written in pyrex?
Since pyrex currently doesn't provide a straightforward interface to
complex data types, I was wondering whether there is some way to access
the real and imaginary portions of each element in the data block of
the array directly.

L.G.
From robert.kern at gmail.com Mon Jan 7 13:41:00 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 07 Jan 2008 12:41:00 -0600
Subject: Re: [SciPy-user] accessing contents of complex numpy array from pyrex
In-Reply-To: <20080107181224.GO26687@localhost.cc.columbia.edu>
References: <20080107181224.GO26687@localhost.cc.columbia.edu>
Message-ID: <4782723C.8040703@gmail.com>

Lev Givon wrote:
> Does anyone know how to access the elements of a 1-D numpy array
> containing complex values from within a function written in pyrex?
> Since pyrex currently doesn't provide a straightforward interface to
> complex data types, I was wondering whether there is some way to
> access the real and imaginary portions of each element in the data
> block of the array directly.

One way would be to define a my_complex struct in a .h file, expose it
in Pyrex, and cast the data pointer to it:

#### my_complex.h
typedef struct {
    double real, imag;
} my_complex;
#### EOF

#### complex_arrays.pyx
# E.g. from numpy/random/mtrand/numpy.pxi
include "numpy.pxi"

cdef extern from "my_complex.h":
    cdef struct my_complex:
        double real, imag

def foo(numpy.ndarray complex_array):
    cdef my_complex* c_complex_array
    c_complex_array = <my_complex*> PyArray_DATA(complex_array)
    # ...
#### EOF

Alternatively, you could get the .real and .imag arrays at the Python
level, cast their data pointers as above, and access the data that way.
Of course, these will be discontiguous, so you will have to take the
stride into account.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From dmitrey.kroshko at scipy.org Mon Jan 7 13:41:38 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 07 Jan 2008 20:41:38 +0200
Subject: Re: [SciPy-user] Would anyone connect fortran constrained linear least squares solver to Python?
In-Reply-To: <8793ae6e0801070932s27fe76cel617e8818facb3668@mail.gmail.com>
References: <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org> <47728F81.10906@scipy.org> <8793ae6e0801061849v4909de45ma00aa95aac7c5a3d@mail.gmail.com> <47825EAB.4080400@scipy.org> <8793ae6e0801070932s27fe76cel617e8818facb3668@mail.gmail.com>
Message-ID: <47827262.20005@scipy.org>

hi Dominique,
I think the example is rather precise and convenient. However, I get an
error while building NLPy:

make
[[ ! -d /home/dmitrey/install/NLPy/nlpy/.objects ]] && mkdir
/bin/sh: [[: not found
make: *** [/home/dmitrey/install/NLPy/nlpy/.objects] Error 127

I get the same error from restore-links; that one I have worked around
by restoring the links at the command prompt (I guess it would be
better to write several files: restoreLinksLinux, restoreLinksWindows,
etc.).
So, what should I do to avoid the error?

Regards, D.
From matthew.brett at gmail.com Mon Jan 7 14:12:26 2008
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 7 Jan 2008 11:12:26 -0800
Subject: Re: [SciPy-user] lpmn - python crash with AMD processor
In-Reply-To: <20080107163356.4538CCC0C2D@webgo24-server3.webgo24-server3.de>
References: <20080107163356.4538CCC0C2D@webgo24-server3.webgo24-server3.de>
Message-ID: <1e2af89e0801071112q70c2b9a5y83bd2996a7658542@mail.gmail.com>

Hi,

If you go to here:

http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103

you will see binaries with win32-p3 in their names. Try installing the
numpy and scipy binaries with these names. Does that fix your problem?

Matthew

From lev at columbia.edu Mon Jan 7 15:36:06 2008
From: lev at columbia.edu (Lev Givon)
Date: Mon, 7 Jan 2008 15:36:06 -0500
Subject: Re: [SciPy-user] accessing contents of complex numpy array from pyrex
In-Reply-To: <4782723C.8040703@gmail.com>
References: <20080107181224.GO26687@localhost.cc.columbia.edu> <4782723C.8040703@gmail.com>
Message-ID: <20080107203602.GA11078@localhost.cc.columbia.edu>

Received from Robert Kern on Mon, Jan 07, 2008 at 01:41:00PM EST:
> One way would be to define a my_complex struct in a .h file, expose
> it in Pyrex, and cast the data pointer to it:
>
> (snip)

Thanks; this did work. Since I also needed to perform various
operations on the array elements, I found it preferable to wrap the C
complex type provided by complex.h with a struct and then define C
macros for the various arithmetic/transcendental operations I needed.

L.G.
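At the Python level, the .real/.imag route Robert mentions can look
like this minimal sketch (plain numpy; both attributes are strided
views, not copies):

>>> import numpy
>>> z = numpy.array([1+2j, 3-4j])
>>> re, im = z.real, z.imag   # strided views into the complex data
>>> re[0] = 7.0               # writes through to the original array
>>> z[0]
(7+2j)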
From roger.herikstad at gmail.com Mon Jan 7 21:03:56 2008
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Tue, 8 Jan 2008 10:03:56 +0800
Subject: [SciPy-user] 64 bit build on Mac OS 10.5
Message-ID:

Hi all,

I was wondering if anyone has successfully built scipy with 64-bit
support on a Mac Pro running Leopard? I would be most grateful if you
could share your experience, especially what compiler/linker flags
you're using? Thanks!

~ Roger

From jeff.lyon at cox.net Mon Jan 7 22:08:03 2008
From: jeff.lyon at cox.net (Jeff Lyon)
Date: Mon, 7 Jan 2008 20:08:03 -0700
Subject: Re: [SciPy-user] 64 bit build on Mac OS 10.5
In-Reply-To:
References:
Message-ID: <80882257-26B6-4781-A4E6-E2691B1DF493@cox.net>

I was able to build SciPy on a PowerMac G5 running Leopard 10.5.1
(9B18). I followed the standard installation instructions on the SciPy
site, except that the build failed using gcc 3.3. I switched back to
gcc 4.0.1 and SciPy built & tested successfully. I'm not sure why this
happened, but if it ain't broke, don't fix it.

compiler details:

$ gcc -v
Using built-in specs.
> Target: powerpc-apple-darwin9 > Configured with: /var/tmp/gcc/gcc-5465~16/src/configure --disable-checking > -enable-werror --prefix=/usr --mandir=/share/man > --enable-languages=c,objc,c++,obj-c++ > --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ > --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib > --build=i686-apple-darwin9 --program-prefix= --host=powerpc-apple-darwin9 > --target=powerpc-apple-darwin9 > Thread model: posix > gcc version 4.0.1 (Apple Inc. build 5465) > > > On Jan 7, 2008, at 7:03 PM, Roger Herikstad wrote: > > Hi all, I was wondering if anyone has successfully built scipy with 64 > bit support on a Mac Pro running leopard? I would be most grateful if you > could share your experience, especially what compiler/linker flags you're > using? Thanks! > > ~ Roger > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ennesnospam1 at aeroakustik.de Tue Jan 8 04:03:24 2008 From: ennesnospam1 at aeroakustik.de (ennesnospam1 at aeroakustik.de) Date: Tue, 8 Jan 2008 10:03:24 +0100 (CET) Subject: [SciPy-user] =?iso-8859-1?q?lpmn_-_python_crash_with_AMD_processo?= =?iso-8859-1?q?r?= Message-ID: <20080108090324.C915BCC0E87@webgo24-server3.webgo24-server3.de> > Hi, > > If you go to here: > > http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103 > > you will see binaries with win32-p3 in their names. Try installing > the numpy and scipy binaries with these names. Does that fix your > problem? > > Matthew > Thank you very much indeed for pointing me to scipy-0.6.0.win32-p3-py2.5.exe. That brought a different version of specfun.pyd with it - no crash anymore, everything works fine. I don't know if this is right place to ask for it, but I think a note on "win-p3" (what is this ? Pentium III ? ) files on http://www.scipy.org/Download would be helpful. Again, thanks to you all Ennes From haase at msg.ucsf.edu Tue Jan 8 04:43:51 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 8 Jan 2008 10:43:51 +0100 Subject: [SciPy-user] 64 bit build on Mac OS 10.5 In-Reply-To: References: <80882257-26B6-4781-A4E6-E2691B1DF493@cox.net> Message-ID: On Jan 8, 2008 8:33 AM, Roger Herikstad wrote: > The standard instructions allowed me to build as well, but I was wondering > if anyone has managed to take advantage of the new 64 bit architecture? > After googling around, I still haven't found any good instruction as to how > this could be done. I'm not sure how much of an effect it will have on the > code itself, but it would be interesting to try it out.. > > ~ Roger > > > > On Jan 8, 2008 11:08 AM, Jeff Lyon wrote: > > > > I was able to build sciPy on a PowerMac G5 running Leopard 10.5.1 (9B18). > I followed the standard installation instructions on the sciPy site except > that the build failed using gcc 3.3. I switched back to gcc 4.0.1 and sciPy > built & tested successfully. I'm not sure why this happened, but if it ain't > broke, don't fix it. > > > > > > compiler details: > > > > > > > > $ gcc -v > > Using built-in specs. 
> > > > Target: powerpc-apple-darwin9 > > Configured with: /var/tmp/gcc/gcc-5465~16/src/configure --disable-checking > -enable-werror --prefix=/usr --mandir=/share/man > --enable-languages=c,objc,c++,obj-c++ > --program-transform-name=/^[cg][^.-]*$/s/$/- 4.0/ > --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib > --build=i686-apple-darwin9 --program-prefix= --host=powerpc-apple-darwin9 > --target=powerpc-apple-darwin9 > > Thread model: posix > > gcc version 4.0.1 (Apple Inc. build 5465) > > > > > > > > > > > > > > > > > > On Jan 7, 2008, at 7:03 PM, Roger Herikstad wrote: > > > > > > > > > > Hi all, > > I was wondering if anyone has successfully built scipy with 64 bit > support on a Mac Pro running leopard? I would be most grateful if you could > share your experience, especially what compiler/linker flags you're using? > Thanks! > > I would like to rephrase and ask a more general question first: Does anyone have run any 64-bit python on Leopard !? 64-bit numpy for example would be quite interesting already ! Background: I read that Leopard is now -- much more than Tiger -- almost "completely 64 bit" (i.e. all it's libraries, and so on). Can someone here confirm this ? Furthermore, I read that the MacPython (2.5) however is still 32-bit. Can someone here confirm this ? Thanks - Sebastian Haase From matthieu.brucher at gmail.com Tue Jan 8 05:18:38 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 8 Jan 2008 11:18:38 +0100 Subject: [SciPy-user] lpmn - python crash with AMD processor In-Reply-To: <20080108090324.C915BCC0E87@webgo24-server3.webgo24-server3.de> References: <20080108090324.C915BCC0E87@webgo24-server3.webgo24-server3.de> Message-ID: > > Thank you very much indeed for pointing me to > scipy-0.6.0.win32-p3-py2.5.exe. That brought a different version of > specfun.pyd with it - no crash anymore, everything works fine. > > I don't know if this is right place to ask for it, but I think a note on > "win-p3" (what is this ? Pentium III ? ) files on > http://www.scipy.org/Download would be helpful. > So you had a problem with the SSE2 instructions ;) The binaries for Numpy are indicated but it is true that the binaries for Scipy should be added with the same explanation as well. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew at sel.cam.ac.uk Tue Jan 8 05:37:24 2008 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Tue, 8 Jan 2008 10:37:24 +0000 Subject: [SciPy-user] 64 bit build on Mac OS 10.5 In-Reply-To: References: <80882257-26B6-4781-A4E6-E2691B1DF493@cox.net> Message-ID: <2702F0B2-FFCF-4ADA-ADB3-84FBF33306BA@sel.cam.ac.uk> Hi, On 8 Jan 2008, at 09:43, Sebastian Haase wrote: > I would like to rephrase and ask a more general question first: > Does anyone have run any 64-bit python on Leopard !? > 64-bit numpy for example would be quite interesting already ! I have a Leopard macpro, and the system-provided python is still 32-bit: b144-mcv1-mlt:~ matthew$ /usr/bin/python -c 'import sys ; print sys.maxint' 2147483647 You can build some 64-bit code on Leopard, but it's often quite painful (I can't get a decent R build, for example). The usual caveats are that you need to tell gcc et al to build 64-bit objects (they build 32-bit by default), and that anything that uses libtool will cause you great woe. 
I built a 64-bit version of python 2.5.1, using the following rune: CC='gcc -m64' LDFLAGS=-m64 ./configure --disable-toolbox-glue and installed it into /usr/local: b144-mcv1-mlt:~ matthew$ file /usr/local/bin/python /usr/local/bin/python: Mach-O 64-bit executable x86_64 b144-mcv1-mlt:~ matthew$ /usr/local/bin/python -c 'import sys ; print sys.maxint' 9223372036854775807 I've not tried building scipy, because I expect it would be rather non- trivial to get everything 64-bit correctly. To be honest, I found the whole business of trying to get a proper 64- bit computing environment on my Mac Pro running OSX too painful, so installed Debian Linux instead - that was a little fiddly, but not too bad, and the end result is so much better - everything comes 64-bit by default (and there is a scipy version installed). See: http://wiki.debian.org/DebianOnIntelMacPro HTH, Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ From lorenzo.isella at gmail.com Tue Jan 8 05:38:01 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Tue, 8 Jan 2008 11:38:01 +0100 Subject: [SciPy-user] New Python Releases and SciPy Message-ID: Dear All, Maybe mine is a silly concern, but within a year/year and a half we may see two new Python releases: 2.6 and 3.0. It is already known that 3.0 will not be 100% backward compatible. I use Python only for scientific purposes and I mainly rely on SciPy, but I wonder if there is anything I should worry about, like being unable to run my own codes after upgrading to the 3.x series and having to apply some non-trivial fixes. Are my concerns groundless? Cheers Lorenzo From oliphant at enthought.com Tue Jan 8 10:50:21 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 08 Jan 2008 09:50:21 -0600 Subject: [SciPy-user] New Python Releases and SciPy In-Reply-To: References: Message-ID: <47839BBD.6060506@enthought.com> Lorenzo Isella wrote: > Dear All, > Maybe mine is a silly concern, but within a year/year and a half we > may see two new Python releases: 2.6 and 3.0. > It is already known that 3.0 will not be 100% backward compatible. I > use Python only for scientific purposes and I mainly rely on SciPy, > but I wonder if there is anything I should worry about, like being > unable to run my own codes after upgrading to the 3.x series and > having to apply some non-trivial fixes. > Are my concerns groundless? > Not really. The 2.6 release should not be much of a problem for either NumPy or SciPy. The transition to Python 3.0, will be a bit harder and I do not forsee that happening very quickly. Perhaps 6 months to 1 year, before NumPy and SciPy are released for Python 3.0 --- depending on how many 3.0 features we "support" -Travis O. From gael.varoquaux at normalesup.org Tue Jan 8 11:03:51 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 8 Jan 2008 17:03:51 +0100 Subject: [SciPy-user] New Python Releases and SciPy In-Reply-To: <47839BBD.6060506@enthought.com> References: <47839BBD.6060506@enthought.com> Message-ID: <20080108160351.GI5441@phare.normalesup.org> On Tue, Jan 08, 2008 at 09:50:21AM -0600, Travis E. Oliphant wrote: > Not really. The 2.6 release should not be much of a problem for either > NumPy or SciPy. The transition to Python 3.0, will be a bit harder and > I do not forsee that happening very quickly. Could you give us an update on the status of the buffer interface? 
Cheers, Ga?l From ndbecker2 at gmail.com Tue Jan 8 11:22:26 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 08 Jan 2008 11:22:26 -0500 Subject: [SciPy-user] scipy docs? Message-ID: Where can I find the latest scipy docs? From tjhnson at gmail.com Tue Jan 8 13:59:34 2008 From: tjhnson at gmail.com (Tom Johnson) Date: Tue, 8 Jan 2008 10:59:34 -0800 Subject: [SciPy-user] New Python Releases and SciPy In-Reply-To: <47839BBD.6060506@enthought.com> References: <47839BBD.6060506@enthought.com> Message-ID: On Jan 8, 2008 7:50 AM, Travis E. Oliphant wrote: > The transition to Python 3.0, will be a bit harder and > I do not forsee that happening very quickly. Perhaps 6 months to 1 > year, before NumPy and SciPy are released for Python 3.0 --- depending > on how many 3.0 features we "support" > Currently [1], 2008-08 is the targeted release for 3.0. As there will be developer releases, is scipy planning to have separate a branch dedicated to moving to 3.0? It seems that a transition can begin now, rather than after 3.0 is released, and the ideal situation (in dreamland) would be that scipy was near ready by the time 3.0 is released. Also, what is the overall plan with respect to 2.6 and 3.0? Will scipy be maintaining two branches, or will it be demanded that people upgrade to 3.0 when scipy upgrades to 3.0? [1] http://www.python.org/download/releases/3.0/ From jasperstolte at gmail.com Tue Jan 8 15:16:05 2008 From: jasperstolte at gmail.com (Jasper Stolte) Date: Tue, 8 Jan 2008 21:16:05 +0100 Subject: [SciPy-user] Control Systems Toolbox Message-ID: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> Hi guys, I'm new to the list, nice to meet you all. Anyway my question is: Is there already someone developing some sort of Control Systems Toolbox / Robust Control Toolbox equivalent for SciPy? I would love to see it added, and I am thinking of building something from scratch. Obviously that wouldn't make much sense if other people are already working on something similar. Regards, Jasper -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Tue Jan 8 15:58:08 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Tue, 08 Jan 2008 21:58:08 +0100 Subject: [SciPy-user] Control Systems Toolbox In-Reply-To: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> Message-ID: <4783E3E0.9040202@ru.nl> Jasper Stolte wrote: > Hi guys, > > I'm new to the list, nice to meet you all. Anyway my question is: Is > there already someone developing some sort of Control Systems Toolbox > / Robust Control Toolbox equivalent for SciPy? what's a "Control Systems Toolbox", any links to equivalent programs ? cheers, Stef Mientki From robert.kern at gmail.com Tue Jan 8 16:04:33 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 08 Jan 2008 15:04:33 -0600 Subject: [SciPy-user] Control Systems Toolbox In-Reply-To: <4783E3E0.9040202@ru.nl> References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <4783E3E0.9040202@ru.nl> Message-ID: <4783E561.3040604@gmail.com> Stef Mientki wrote: > > Jasper Stolte wrote: >> Hi guys, >> >> I'm new to the list, nice to meet you all. Anyway my question is: Is >> there already someone developing some sort of Control Systems Toolbox >> / Robust Control Toolbox equivalent for SciPy? > what's a "Control Systems Toolbox", any links to equivalent programs ? 
http://www.mathworks.com/products/control/
http://www.mathworks.com/products/robust/

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From cohen at slac.stanford.edu  Tue Jan  8 17:11:09 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Tue, 08 Jan 2008 14:11:09 -0800
Subject: [SciPy-user] read_array problem
In-Reply-To:
References: <200712171812.11940.cscheit@lstm.uni-erlangen.de>
Message-ID: <4783F4FD.4050702@slac.stanford.edu>

hi, I followed the advice and looked at the docstring:

Help on function loadtxt in module numpy.core.numeric:

loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None,
converters=None, skiprows=0, usecols=None, unpack=False)
    Load ASCII data from fname into an array and return the array.

    The data must be regular, same number of values in every row

    fname can be a filename or a file handle.  Support for gzipped files
    is automatic, if the filename ends in .gz

    See scipy.loadmat to read and write matfiles.
....

Beyond the fact that it comes a bit as a surprise that numpy refers to
loadmat from a distinct package (well, sometimes I wonder how distinct
scipy and numpy are meant to be.... as a matter of fact
help(scipy.loadtxt) will work as well and give you the same info), this
scipy.loadmat does not seem to exist.... Maybe scipy.load was intended?
best,
Johann

Jarrod Millman wrote:
> On Dec 17, 2007 9:12 AM, Christoph Scheit wrote:
>
>> just for curiosity I have a question regarding read_array.
>> When I use the scipy.io read_array function I observe
>> some behaviour which I don't understand...
>>
>
> Hey Chris,
>
> scipy.io.read_array is no longer supported.  In the next release of
> scipy, it will be officially deprecated.
>
> Please take a look at numpy.loadtxt(), which has the same
> functionality with a slightly different syntax.  The loadtxt docstring
> should provide detailed instructions for how to use it.  If you have
> any questions about using numpy.loadtxt(), please let us know.
>
> Thanks,
>
>

From elcorto at gmx.net  Tue Jan  8 17:32:30 2008
From: elcorto at gmx.net (Steve Schmerler)
Date: Tue, 08 Jan 2008 23:32:30 +0100
Subject: [SciPy-user] read_array problem
In-Reply-To: <4783F4FD.4050702@slac.stanford.edu>
References: <200712171812.11940.cscheit@lstm.uni-erlangen.de>
	<4783F4FD.4050702@slac.stanford.edu>
Message-ID: <4783F9FE.1080205@gmx.net>

Johann Cohen-Tanugi wrote:
> hi, I followed the advice and looked at the docstring:
> Help on function loadtxt in module numpy.core.numeric:
>
> loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None,
> converters=None, skiprows=0, usecols=None, unpack=False)
>     Load ASCII data from fname into an array and return the array.
>
>     The data must be regular, same number of values in every row
>
>     fname can be a filename or a file handle.  Support for gzipped files
>     is automatic, if the filename ends in .gz
>
>     See scipy.loadmat to read and write matfiles.
> ....
>
> Beyond the fact that it comes a bit as a surprise that numpy refers to
> loadmat from a distinct package (well, sometimes I wonder how distinct
> scipy and numpy are meant to be.... as a matter of fact
> help(scipy.loadtxt) will work as well and give you the same info), this
> scipy.loadmat does not seem to exist.... Maybe scipy.load was intended?
> best,
> Johann
>

I think scipy.io.loadmat is correct.
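For example, something along these lines should work (untested here;
'data.txt' and 'data.mat' are just placeholder file names):

import numpy
import scipy.io

a = numpy.loadtxt('data.txt')      # plain ASCII table -> array (lives in numpy)
d = scipy.io.loadmat('data.mat')   # MATLAB .mat file -> dict (lives in scipy.io)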
cheers,
steve

From s.mientki at ru.nl  Tue Jan  8 18:15:02 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Wed, 09 Jan 2008 00:15:02 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
Message-ID: <478403F6.3000302@ru.nl>

hi Jasper,

you might be interested in the framework I'm building right now (will be
released under BSD). Look here, and be sure to watch the demo at the
bottom of the page first:
http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
If you're interested, you can also contact me offline.

cheers,
Stef Mientki

Jasper Stolte wrote:
> Hi guys,
>
> I'm new to the list, nice to meet you all. Anyway my question is: Is
> there already someone developing some sort of Control Systems Toolbox
> / Robust Control Toolbox equivalent for SciPy? I would love to see it
> added, and I am thinking of building something from scratch. Obviously
> that wouldn't make much sense if other people are already working on
> something similar.
>
> Regards,
> Jasper
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From ryanlists at gmail.com  Tue Jan  8 18:24:40 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 8 Jan 2008 17:24:40 -0600
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <478403F6.3000302@ru.nl>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
Message-ID:

I have started a very basic one and will keep working on it this
semester as I teach a controls class.  I have defined a
TransferFunction class (sort of derived from signal.lti), given it
methods for basic block diagram algebra, root locus, and Bode control
design, and some c2d stuff.  I am planning to release it under a BSD
license.  It is a bit messy right now.  It is attached along with
another file it depends on.  (I don't think they depend on anything
else...)

Let me know if you find this useful and want to be informed of
updates, or if you think we can collaborate in some useful way.

Ryan

On Jan 8, 2008 5:15 PM, Stef Mientki wrote:
> hi Jasper,
>
> you might be interested in the framework I'm building right now (will be
> released under BSD). Look here, and be sure to watch the demo at the
> bottom of the page first:
>
> http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> If you're interested, you can also contact me offline.
>
> cheers,
> Stef Mientki
>
>
> Jasper Stolte wrote:
> > Hi guys,
> >
> > I'm new to the list, nice to meet you all. Anyway my question is: Is
> > there already someone developing some sort of Control Systems Toolbox
> > / Robust Control Toolbox equivalent for SciPy? I would love to see it
> > added, and I am thinking of building something from scratch. Obviously
> > that wouldn't make much sense if other people are already working on
> > something similar.
> >
> > Regards,
> > Jasper
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: controls.py
Type: text/x-python
Size: 27762 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signal_processing.py
Type: text/x-python
Size: 1469 bytes
Desc: not available
URL:

From ryanlists at gmail.com  Tue Jan  8 18:27:16 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 8 Jan 2008 17:27:16 -0600
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To:
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
Message-ID:

p.s. I will have to keep working on this, add documentation, and make
it nicely redistributable for the sake of my students in the coming
weeks.

On Jan 8, 2008 5:24 PM, Ryan Krauss wrote:
> I have started a very basic one and will keep working on it this
> semester as I teach a controls class.  I have defined a
> TransferFunction class (sort of derived from signal.lti), given it
> methods for basic block diagram algebra, root locus, and Bode control
> design, and some c2d stuff.  I am planning to release it under a BSD
> license.  It is a bit messy right now.  It is attached along with
> another file it depends on.  (I don't think they depend on anything
> else...)
>
> Let me know if you find this useful and want to be informed of
> updates, or if you think we can collaborate in some useful way.
>
> Ryan
>
>
> On Jan 8, 2008 5:15 PM, Stef Mientki wrote:
> > hi Jasper,
> >
> > you might be interested in the framework I'm building right now (will be
> > released under BSD). Look here, and be sure to watch the demo at the
> > bottom of the page first:
> >
> > http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> > If you're interested, you can also contact me offline.
> >
> > cheers,
> > Stef Mientki
> >
> >
> > Jasper Stolte wrote:
> > > Hi guys,
> > >
> > > I'm new to the list, nice to meet you all. Anyway my question is: Is
> > > there already someone developing some sort of Control Systems Toolbox
> > > / Robust Control Toolbox equivalent for SciPy? I would love to see it
> > > added, and I am thinking of building something from scratch. Obviously
> > > that wouldn't make much sense if other people are already working on
> > > something similar.
> > >
> > > Regards,
> > > Jasper
> > > ------------------------------------------------------------------------
> > >
> > > _______________________________________________
> > > SciPy-user mailing list
> > > SciPy-user at scipy.org
> > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >

From millman at berkeley.edu  Tue Jan  8 19:01:09 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Tue, 8 Jan 2008 16:01:09 -0800
Subject: [SciPy-user] read_array problem
In-Reply-To: <4783F4FD.4050702@slac.stanford.edu>
References: <200712171812.11940.cscheit@lstm.uni-erlangen.de>
	<4783F4FD.4050702@slac.stanford.edu>
Message-ID:

On Jan 8, 2008 2:11 PM, Johann Cohen-Tanugi wrote:
> Beyond the fact that it comes a bit as a surprise that numpy refers to
> loadmat from a distinct package (well, sometimes I wonder how distinct
> scipy and numpy are meant to be.... as a matter of fact
> help(scipy.loadtxt) will work as well and give you the same info), this
> scipy.loadmat does not seem to exist.... Maybe scipy.load was intended?

NumPy provides basic matrix support (plus a little extra for
historical reasons).  We are reorganizing the I/O code to better
reflect this.  Basic array reading/writing support is in NumPy (e.g.,
when you just need to read an array from a text file).  If you want
to add support for reading/writing arrays in any number of other
standard formats (e.g., Matlab, Excel, etc.), then you can use SciPy.
I think that it is helpful for docstrings to refer to useful,
recommended code even if you may need to install additional packages.
For example, some functions in SciPy refer to Scikits.
NumPy/SciPy/Scikits are all in some sense part of the same project.

scipy.loadtxt just calls numpy.loadtxt.  I have updated the docstring
to refer to scipy.io.loadmat.  Thanks for catching the typo.

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From laytonjb at charter.net  Tue Jan  8 21:46:51 2008
From: laytonjb at charter.net (laytonjb at charter.net)
Date: Tue, 8 Jan 2008 18:46:51 -0800
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <4783E561.3040604@gmail.com>
Message-ID: <20080108214651.HIXDS.736990.root@fepweb14>

---- Robert Kern wrote:
> Stef Mientki wrote:
> >
> > Jasper Stolte wrote:
> >> Hi guys,
> >>
> >> I'm new to the list, nice to meet you all. Anyway my question is: Is
> >> there already someone developing some sort of Control Systems Toolbox
> >> / Robust Control Toolbox equivalent for SciPy?

I'm not sure this is relevant, but I developed a control toolbox a long
time ago for RLaB.  I've got the code somewhere.  RLaB is fairly close
to Matlab if someone wants to use it as the basis for a Control Toolbox
for Scipy (It was released under a GPL license, but I can change that
if necessary :) ).

Jeff

From emanuele at relativita.com  Wed Jan  9 02:47:48 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Wed, 09 Jan 2008 08:47:48 +0100
Subject: [SciPy-user] differences between numpy.linalg.cholesky and
	scipy.linalg.cholesky?
In-Reply-To: <8793ae6e0801070948o37f35a1eu4a57f3e2ff4d324d@mail.gmail.com>
References: <477FA3F1.40401@relativita.com>
	<8793ae6e0801070948o37f35a1eu4a57f3e2ff4d324d@mail.gmail.com>
Message-ID: <47847C24.5020500@relativita.com>

Dominique Orban wrote:
> On 1/5/08, Emanuele Olivetti wrote:
>
>> I'm using the Cholesky decomposition a lot and trying both
>> numpy.linalg.cholesky and scipy.linalg.cholesky on
>> Hermitian positive definite matrices. Sometimes I
>> get """LinAlgError:
>> Matrix is not positive definite -
>> Cholesky decomposition cannot be computed"""
>> when using numpy's cholesky. No problems with
>> scipy's cholesky. Why?
>>
>
> Could you provide a small example?
>
>

Currently it is not so easy to provide a simple example. I'm using
Cholesky factorization on a 1000x1000 matrix generated from some large
datasets and getting the error a few times. But I'll try to generate a
small one soon.

Looking in numpy/scipy source code I see that:
- numpy.linalg.cholesky wraps the fortran function "dpotrf"
- scipy.linalg.decomp wraps "potrf"

What is the difference between 'dpotrf' and 'potrf'?

Thanks,

Emanuele

From robert.kern at gmail.com  Wed Jan  9 02:58:19 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 09 Jan 2008 01:58:19 -0600
Subject: [SciPy-user] differences between numpy.linalg.cholesky and
	scipy.linalg.cholesky?
In-Reply-To: <47847C24.5020500@relativita.com>
References: <477FA3F1.40401@relativita.com>
	<8793ae6e0801070948o37f35a1eu4a57f3e2ff4d324d@mail.gmail.com>
	<47847C24.5020500@relativita.com>
Message-ID: <47847E9B.7080107@gmail.com>

Emanuele Olivetti wrote:
> Looking in numpy/scipy source code I see that:
> - numpy.linalg.cholesky wraps the fortran function "dpotrf"
> - scipy.linalg.decomp wraps "potrf"
>
> What is the difference between 'dpotrf' and 'potrf'?

Most LAPACK subroutines have variants for each type specified by a
single-letter prefix in front of the subroutine name.

S = float32
D = float64
C = complex64
Z = complex128

One of the features of f2py, which is used to make scipy.linalg.decomp
but not numpy.linalg, is that it can have templated interface
definitions so one declaration will wrap all four variants. If you call
scipy.linalg.cholesky with a float64 array, it will ultimately call
DPOTRF, too.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From jasperstolte at gmail.com  Wed Jan  9 04:05:00 2008
From: jasperstolte at gmail.com (Jasper Stolte)
Date: Wed, 9 Jan 2008 10:05:00 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <478403F6.3000302@ru.nl>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
Message-ID: <89198da10801090105h68e27db2o8bdb34f547107fdf@mail.gmail.com>

Hi Stef,

I'm certainly interested in your framework, and enjoyed watching it
grow over the last months in the wx-Python mailing list. However, I
don't think it is appropriate to include it into a core toolbox of
SciPy, which should be about the science, not the UI / framework. I'll
certainly keep it in mind for development of higher level applications,
maybe even a loop shaping design tool or something.
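For the toolbox core itself I'm picturing something built on top of
scipy.signal.lti. Roughly along these lines (an untested sketch; the
plant, the gain K and all the names are made up for illustration):

import numpy as np
import scipy.signal as signal

# plant G(s) = 1 / (s^2 + 2 s + 1) and a proportional controller K
G = signal.lti([1.0], [1.0, 2.0, 1.0])
K = 5.0

# unity feedback: closed loop = K*G / (1 + K*G)
num = K * np.asarray(G.num)
den = np.polyadd(np.asarray(G.den), num)
closed = signal.lti(num, den)

t, y = signal.step(closed)   # closed-loop step response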
Greetz,
Jasper

On 1/9/08, Stef Mientki wrote:
>
> hi Jasper,
>
> you might be interested in the framework I'm building right now (will be
> released under BSD). Look here, and be sure to watch the demo at the
> bottom of the page first:
>
> http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> If you're interested, you can also contact me offline.
>
> cheers,
> Stef Mientki
>
> Jasper Stolte wrote:
> > Hi guys,
> >
> > I'm new to the list, nice to meet you all. Anyway my question is: Is
> > there already someone developing some sort of Control Systems Toolbox
> > / Robust Control Toolbox equivalent for SciPy? I would love to see it
> > added, and I am thinking of building something from scratch. Obviously
> > that wouldn't make much sense if other people are already working on
> > something similar.
> >
> > Regards,
> > Jasper
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jasperstolte at gmail.com  Wed Jan  9 04:12:44 2008
From: jasperstolte at gmail.com (Jasper Stolte)
Date: Wed, 9 Jan 2008 10:12:44 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To:
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
Message-ID: <89198da10801090112u4cb8cc2o7636d63390b264aa@mail.gmail.com>

Ryan:
Excellent, maybe it's best then if I wait for a few weeks for all your
crazy last-minute modifications to settle, and start working with you
to make this into a full-fledged control systems toolbox?

Jeff:
If you are willing to release the code under BSD or MIT licence, I think
your toolbox is extremely relevant. Could you send a link to it via the
mailing list?

Greetz,
Jasper

On 1/9/08, Ryan Krauss wrote:
>
> I have started a very basic one and will keep working on it this
> semester as I teach a controls class.  I have defined a
> TransferFunction class (sort of derived from signal.lti), given it
> methods for basic block diagram algebra, root locus, and Bode control
> design, and some c2d stuff.  I am planning to release it under a BSD
> license.  It is a bit messy right now.  It is attached along with
> another file it depends on.  (I don't think they depend on anything
> else...)
>
> Let me know if you find this useful and want to be informed of
> updates, or if you think we can collaborate in some useful way.
>
> Ryan
>
>
> On Jan 8, 2008 5:15 PM, Stef Mientki wrote:
> > hi Jasper,
> >
> > you might be interested in the framework I'm building right now (will be
> > released under BSD). Look here, and be sure to watch the demo at the
> > bottom of the page first:
> >
> > http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> > If you're interested, you can also contact me offline.
> >
> > cheers,
> > Stef Mientki
> >
> >
> > Jasper Stolte wrote:
> > > Hi guys,
> > >
> > > I'm new to the list, nice to meet you all. Anyway my question is: Is
> > > there already someone developing some sort of Control Systems Toolbox
> > > / Robust Control Toolbox equivalent for SciPy? I would love to see it
> > > added, and I am thinking of building something from scratch. Obviously
> > > that wouldn't make much sense if other people are already working on
> > > something similar.
> > >
> > > Regards,
> > > Jasper
> > > ------------------------------------------------------------------------
> > >
> > > _______________________________________________
> > > SciPy-user mailing list
> > > SciPy-user at scipy.org
> > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ceball at users.sourceforge.net  Wed Jan  9 04:09:22 2008
From: ceball at users.sourceforge.net (C. Ball)
Date: Wed, 9 Jan 2008 09:09:22 +0000 (UTC)
Subject: [SciPy-user] weave and numpy.array(dtype=object)
Message-ID:

Hi,

I wonder if someone could help me to use numpy arrays of type object_
(dtype="O") with weave?

I have the following class:

import weave

class TestClass(object):

    def __call__(self, some_array):
        code = """
            printf("%f\\n",1.0);
        """
        weave.inline(code, ['some_array'], local_dict=locals(), verbose=1)

If I create an instance of this class and then call it with an array of
floats (or ints, etc), it compiles fine. But, if I try to call the class
with an array of python objects, I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test_weave.py", line 74, in __call__
    inline(code, ['some_array'], local_dict=locals())
  File "test_weave.py", line 16, in inline
    weave.inline(*params, **named_params)
  File "[weave location]/weave/inline_tools.py", line 333, in inline
    **kw)
  File "[weave location]/weave/inline_tools.py", line 459, in compile_function
    verbose=verbose, **kw)
  File "[weave location]/weave/ext_tools.py", line 353, in compile
    kw,file = self.build_kw_and_file(location,kw)
  File "[weave location]/weave/ext_tools.py", line 334, in build_kw_and_file
    file = self.generate_file(location=location)
  File "[weave location]/weave/ext_tools.py", line 295, in generate_file
    code = self.module_code()
  File "[weave location]/weave/ext_tools.py", line 203, in module_code
    self.function_code(),
  File "[weave location]/weave/ext_tools.py", line 269, in function_code
    all_function_code += func.function_code()
  File "[weave location]/weave/inline_tools.py", line 77, in function_code
    decl_code = indent(self.arg_declaration_code(),4)
  File "[weave location]/weave/inline_tools.py", line 62, in arg_declaration_code
    for arg in self.arg_specs]
  File "[weave location]/weave/standard_array_spec.py", line 158, in declaration_code
    res = self.template_vars(inline=inline)
  File "[weave location]/weave/standard_array_spec.py", line 151, in template_vars
    res['num_type'] = num_to_c_types[self.var_type]
KeyError: 'O'

I'm using a copy of weave that I just checked out from SVN:

URL: http://svn.scipy.org/svn/scipy/trunk/scipy/weave
Revision: 3809
Last Changed Rev: 3736
Last Changed Date: 2007-12-28 06:53:35 +0800 (Fri, 28 Dec 2007)
Properties Last Updated: 2008-01-09 15:16:42 +0800 (Wed, 09 Jan 2008)

(and Python 2.5.1 with numpy 1.0.2).

I couldn't find anything in the weave tutorial
(http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/weave/doc/tutorial.txt),
the numpy book, or the Numeric documentation, so please tell me if I
have missed something obvious.
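For completeness, the smallest session that shows the difference for me
is roughly this (untested transcription of the behaviour described above;
the float array compiles and runs, the object array dies with the KeyError):

import numpy
import weave

f_arr = numpy.array([1.0, 2.0, 3.0])                # dtype 'd' -- works
o_arr = numpy.array([1.0, 2.0, 3.0], dtype=object)  # dtype 'O' -- fails

code = 'printf("%f\\n", 1.0);'
weave.inline(code, ['f_arr'])   # compiles and prints 1.000000
weave.inline(code, ['o_arr'])   # raises KeyError: 'O'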
Thanks,
Chris

From massimo.sandal at unibo.it  Wed Jan  9 04:31:16 2008
From: massimo.sandal at unibo.it (massimo sandal)
Date: Wed, 09 Jan 2008 10:31:16 +0100
Subject: [SciPy-user] New Python Releases and SciPy
In-Reply-To:
References: <47839BBD.6060506@enthought.com>
Message-ID: <47849464.8030207@unibo.it>

Tom Johnson wrote:
> Currently [1], 2008-08 is the targeted release for 3.0.  As there will
> be developer releases, is scipy planning to have a separate branch
> dedicated to moving to 3.0?  It seems that a transition can begin now,
> rather than after 3.0 is released, and the ideal situation (in
> dreamland) would be that scipy was near ready by the time 3.0 is
> released.  Also, what is the overall plan with respect to 2.6 and 3.0?
> Will scipy be maintaining two branches, or will it be demanded that
> people upgrade to 3.0 when scipy upgrades to 3.0?

I have also read that the python guys are preparing a 2.x --> 3.0
"converter":
http://www.python.org/dev/peps/pep-3000/#compatibility-and-transition

The "Py3k warnings mode" also seems incredibly useful to me.

Are there any scipy-specific issues with the transition?

m.

--
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387
-------------- next part --------------
A non-text attachment was scrubbed...
Name: massimo.sandal.vcf
Type: text/x-vcard
Size: 274 bytes
Desc: not available
URL:

From laytonjb at charter.net  Wed Jan  9 06:12:10 2008
From: laytonjb at charter.net (laytonjb at charter.net)
Date: Wed, 9 Jan 2008 3:12:10 -0800
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10801090112u4cb8cc2o7636d63390b264aa@mail.gmail.com>
Message-ID: <20080109061210.UX2I7.745771.root@fepweb14>

---- Jasper Stolte wrote:
>
> Jeff:
> If you are willing to release the code under BSD or MIT licence, I think
> your toolbox is extremely relevant. Could you send a link to it via the
> mailing list?

Let me look at the licenses, but I don't see a problem in general (it's
been out under GPL for about 12 years :) ). I've actually got the
toolbox on my home machine and I'm away from it for the moment. I'll be
back this weekend and I can upload the code somewhere and indicate
where it is.

If anyone wants to grab it now, you can go to the rlab page at
sourceforge and download the whole package. The toolbox is in there
somewhere (it's not too big).

Thanks!

Jeff

>
> Greetz,
> Jasper
>
>
> On 1/9/08, Ryan Krauss wrote:
> >
> > I have started a very basic one and will keep working on it this
> > semester as I teach a controls class.  I have defined a
> > TransferFunction class (sort of derived from signal.lti), given it
> > methods for basic block diagram algebra, root locus, and Bode control
> > design, and some c2d stuff.  I am planning to release it under a BSD
> > license.  It is a bit messy right now.  It is attached along with
> > another file it depends on.  (I don't think they depend on anything
> > else...)
> >
> > Let me know if you find this useful and want to be informed of
> > updates, or if you think we can collaborate in some useful way.
> >
> > Ryan
> >
> >
> > On Jan 8, 2008 5:15 PM, Stef Mientki wrote:
> > > hi Jasper,
> > >
> > > you might be interested in the framework I'm building right now (will be
> > > released under BSD). Look here, and be sure to watch the demo at the
> > > bottom of the page first:
> > >
> > > http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> > > If you're interested, you can also contact me offline.
> > >
> > > cheers,
> > > Stef Mientki
> > >
> > >
> > > Jasper Stolte wrote:
> > > > Hi guys,
> > > >
> > > > I'm new to the list, nice to meet you all. Anyway my question is: Is
> > > > there already someone developing some sort of Control Systems Toolbox
> > > > / Robust Control Toolbox equivalent for SciPy? I would love to see it
> > > > added, and I am thinking of building something from scratch. Obviously
> > > > that wouldn't make much sense if other people are already working on
> > > > something similar.
> > > >
> > > > Regards,
> > > > Jasper
> > > > ------------------------------------------------------------------------
> > > >
> > > > _______________________________________________
> > > > SciPy-user mailing list
> > > > SciPy-user at scipy.org
> > > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > > >
> > > _______________________________________________
> > > SciPy-user mailing list
> > > SciPy-user at scipy.org
> > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > >
> >

From ryanlists at gmail.com  Wed Jan  9 08:39:56 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Wed, 9 Jan 2008 07:39:56 -0600
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10801090112u4cb8cc2o7636d63390b264aa@mail.gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
	<89198da10801090112u4cb8cc2o7636d63390b264aa@mail.gmail.com>
Message-ID:

Sounds good.  FYI, the course I am teaching is just a first course in
control design.  We won't do any robust control or anything.  So, I
don't anticipate adding any new major areas beyond block diagram
algebra, root locus, and Bode design.  I mainly anticipate cleaning
that stuff up, adding some bells and whistles, and then adding
examples and documentation.

Ryan

On Jan 9, 2008 3:12 AM, Jasper Stolte wrote:
> Ryan:
> Excellent, maybe it's best then if I wait for a few weeks for all your crazy
> last-minute modifications to settle, and start working with you to make this
> into a full-fledged control systems toolbox?
>
> Jeff:
> If you are willing to release the code under BSD or MIT licence, I think
> your toolbox is extremely relevant. Could you send a link to it via the mailing
> list?
>
> Greetz,
> Jasper
>
>
>
>
> On 1/9/08, Ryan Krauss wrote:
> > I have started a very basic one and will keep working on it this
> > semester as I teach a controls class.  I have defined a
> > TransferFunction class (sort of derived from signal.lti), given it
> > methods for basic block diagram algebra, root locus, and Bode control
> > design, and some c2d stuff.  I am planning to release it under a BSD
> > license.  It is a bit messy right now.  It is attached along with
> > another file it depends on.  (I don't think they depend on anything
> > else...)
> >
> > Let me know if you find this useful and want to be informed of
> > updates, or if you think we can collaborate in some useful way.
> >
> > Ryan
> >
> >
> > On Jan 8, 2008 5:15 PM, Stef Mientki wrote:
> > > hi Jasper,
> > >
> > > you might be interested in the framework I'm building right now (will be
> > > released under BSD). Look here, and be sure to watch the demo at the
> > > bottom of the page first:
> > >
> > > http://oase.uci.kun.nl/~mientki/data_www/pylab_works/pw_animations_screenshots.html
> > > If you're interested, you can also contact me offline.
> > >
> > > cheers,
> > > Stef Mientki
> > >
> > >
> > > Jasper Stolte wrote:
> > > > Hi guys,
> > > >
> > > > I'm new to the list, nice to meet you all. Anyway my question is: Is
> > > > there already someone developing some sort of Control Systems Toolbox
> > > > / Robust Control Toolbox equivalent for SciPy? I would love to see it
> > > > added, and I am thinking of building something from scratch. Obviously
> > > > that wouldn't make much sense if other people are already working on
> > > > something similar.
> > > >
> > > > Regards,
> > > > Jasper
> > > > ------------------------------------------------------------------------
> > > >
> > > > _______________________________________________
> > > > SciPy-user mailing list
> > > > SciPy-user at scipy.org
> > > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > > >
> > > _______________________________________________
> > > SciPy-user mailing list
> > > SciPy-user at scipy.org
> > > http://projects.scipy.org/mailman/listinfo/scipy-user
> > >
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>

From s.mientki at ru.nl  Wed Jan  9 13:45:07 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Wed, 09 Jan 2008 19:45:07 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10801090105h68e27db2o8bdb34f547107fdf@mail.gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
	<89198da10801090105h68e27db2o8bdb34f547107fdf@mail.gmail.com>
Message-ID: <47851633.1020304@ru.nl>

hi Jasper,

Jasper Stolte wrote:
> Hi Stef,
>
> I'm certainly interested in your framework, and enjoyed watching it
> grow over the last months in the wx-Python mailing list. However, I
> don't think it is appropriate to include it into a core toolbox of
> SciPy, which should be about the science, not the UI / framework.
I totally agree!
I think there are 2 kinds of users of Scipy,
the science-oriented people and the engineering-oriented people.
And for the second group of people there's a huge gap between Scipy
and packages like MatLab/LabView: compare for instance documentation
and user-interface.
So it would be very welcome if engineers had a better entrance to Scipy.
btw, I found another one this week: "Pyphant".
> I'll certainly keep it in mind for development of higher level
> applications, maybe even a loop shaping design tool or something.
>
I'll see if I can do something with the material from Ryan.
cheers,
Stef

From s.mientki at ru.nl  Wed Jan  9 13:54:43 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Wed, 09 Jan 2008 19:54:43 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To:
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl>
Message-ID: <47851873.1050807@ru.nl>

hi Ryan,

Ryan Krauss wrote:
> I have started a very basic one and will keep working on it this
> semester as I teach a controls class.  I have defined a
> TransferFunction class (sort of derived from signal.lti), given it
> methods for basic block diagram algebra, root locus, and Bode control
> design, and some c2d stuff.  I am planning to release it under a BSD
> license.  It is a bit messy right now.  It is attached along with
> another file it depends on.  (I don't think they depend on anything
> else...)
>
> Let me know if you find this useful and want to be informed of
> updates, or if you think we can collaborate in some useful way.
>

I think it's very useful, as Control Systems should be one of the items
encapsulated in PyLab_Works.
I'll see what I can do in the next couple of days to embed your libraries.
I might need some help (examples), as my control theory is quite rusty
(educated in "Control Systems Theory" from Olle Elgerd, 1967 ;-)
Although PyLab-Works is still quite premature,
I think it would be an excellent candidate for education
(at the moment I am implementing combined help/instruction/fill-in forms).
So I would love to see your course material / instruction / assignments
etc,
to see if they can be embedded too.
So maybe not for your current course,
but for the next course you might be able to use it.

cheers,
Stef

From jasperstolte at gmail.com  Wed Jan  9 15:00:38 2008
From: jasperstolte at gmail.com (Jasper Stolte)
Date: Wed, 9 Jan 2008 21:00:38 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <47851873.1050807@ru.nl>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl> <47851873.1050807@ru.nl>
Message-ID: <89198da10801091200m3e1e35e5w68d00a5480dba408@mail.gmail.com>

Wow Stef,
You're really enthusiastic here! :) Good going. How about we get the
basic functionality up, and you help build engineering/education tools
from it?

Jeff:
467 kb of code.. Nice and complete, basic control theory seems to be
all there. I guess I'm lucky there's already basic LTI-system
functionality in SciPy's signal processing toolbox.

Ryan:
I'll browse your code tonight, and see how you implemented the basic
system block. I'd better give the structure some thought so I can
extend it easily to LQG/H2/Hinf later.

Looks like I've got my work cut out for me..

Greetz,
Jasper

On Jan 9, 2008 7:54 PM, Stef Mientki wrote:
> hi Ryan,
>
> Ryan Krauss wrote:
> > I have started a very basic one and will keep working on it this
> > semester as I teach a controls class.  I have defined a
> > TransferFunction class (sort of derived from signal.lti), given it
> > methods for basic block diagram algebra, root locus, and Bode control
> > design, and some c2d stuff.  I am planning to release it under a BSD
> > license.  It is a bit messy right now.  It is attached along with
> > another file it depends on.  (I don't think they depend on anything
> > else...)
> >
> > Let me know if you find this useful and want to be informed of
> > updates, or if you think we can collaborate in some useful way.
> >
>
> I think it's very useful, as Control Systems should be one of the items
> encapsulated in PyLab_Works.
> I'll see what I can do in the next couple of days to embed your libraries.
> I might need some help (examples), as my control theory is quite rusty
> (educated in "Control Systems Theory" from Olle Elgerd, 1967 ;-)
> Although PyLab-Works is still quite premature,
> I think it would be an excellent candidate for education
> (at the moment I am implementing combined help/instruction/fill-in forms).
> So I would love to see your course material / instruction / assignments
> etc,
> to see if they can be embedded too.
> So maybe not for your current course,
> but for the next course you might be able to use it.
>
> cheers,
> Stef
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From s.mientki at ru.nl  Wed Jan  9 15:58:08 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Wed, 09 Jan 2008 21:58:08 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <89198da10801091200m3e1e35e5w68d00a5480dba408@mail.gmail.com>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com>
	<478403F6.3000302@ru.nl> <47851873.1050807@ru.nl>
	<89198da10801091200m3e1e35e5w68d00a5480dba408@mail.gmail.com>
Message-ID: <47853560.8030403@ru.nl>

hi Jasper,

> Wow Stef,
> You're really enthusiastic here! :) Good going. How about we get the
> basic functionality up, and you help build engineering/education tools
> from it?
Very good idea, on my side it's a deal.
Very nice to see that there are other enthusiastic people here.

cheers,
Stef

From schut at sarvision.nl  Thu Jan 10 05:13:01 2008
From: schut at sarvision.nl (Vincent Schut)
Date: Thu, 10 Jan 2008 11:13:01 +0100
Subject: [SciPy-user] ndimage zoom axes selection
Message-ID:

Hi,

I want to use ndimage.zoom on my arrays, but only on the last 2 axes.
E.g. I have an array of shape (46, 2, 7, 256, 256) that I want to become
(46, 2, 7, 512, 512). Actually, my array is a multidimensional stack of
2d images, and I want just those 2d images to be interpolated.
Currently I just loop over the other dimensions, but maybe there is a
better way? Or maybe ndimage.zoom could be changed to accept a tuple as
zoom factor, that would set the zoom factor per axis, so I could use
ndimage.zoom(input, (1, 1, 1, 2, 2))?
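For reference, the loop I currently use looks more or less like this
(a simplified, untested sketch; the shapes are just my example above):

import numpy as np
from scipy import ndimage

data = np.zeros((46, 2, 7, 256, 256), dtype=np.float32)  # stack of 2d images
out = np.empty(data.shape[:-2] + (512, 512), dtype=data.dtype)
for idx in np.ndindex(*data.shape[:-2]):      # loop over the leading axes
    out[idx] = ndimage.zoom(data[idx], 2.0)   # interpolate each 2d image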
Regards,
Vincent Schut.

From lfriedri at imtek.de  Thu Jan 10 08:04:19 2008
From: lfriedri at imtek.de (Lars Friedrich)
Date: Thu, 10 Jan 2008 14:04:19 +0100
Subject: [SciPy-user] Mie
Message-ID: <478617D3.7020404@imtek.de>

Hello,

is there some open source code for Python/scipy that will compute the
Mie coefficients in light scattering for different particle sizes,
refractive indices, and scattering angles?

Lars

From jmartinezs at sii.cl  Thu Jan 10 09:34:14 2008
From: jmartinezs at sii.cl (Juan Martinez)
Date: Thu, 10 Jan 2008 11:34:14 -0300
Subject: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a
	little question
Message-ID: <001101c85395$df931510$4301210a@sii.cl>

Hello Everybody:

I'm a newbie user of SciPy, and I try to get the determinant of a
matrix, but the matrix object is modified by the det() method of
linalg.

A little piece of code with an example:

CODE

#!/usr/bin/env python
# coding: latin-1

# Modules import
from numarray import *
from scipy.linalg import *

ar = array([[4.0, -2.0, 1.0], \
            [-2.0, 4.0, -2.0], \
            [1.0, -2.0, 3.0]])
print 'Before det() ar = \n', ar

print det(ar)                # Get determinant of ar
print 'After det() ar = \n', ar

END CODE

ON SCREEN

Before det() ar =
[[ 4. -2.  1.]
 [-2.  4. -2.]
 [ 1. -2.  3.]]
24.0
After det() ar =
[[ 4.   -0.5   0.25]
 [-2.    3.   -0.5 ]
 [ 1.   -1.5   2.  ]]

END OUTPUT

Please, help me to know what I'm doing wrong.

Thank you in advance

From matthieu.brucher at gmail.com  Thu Jan 10 09:48:21 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 10 Jan 2008 15:48:21 +0100
Subject: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a
	little question
In-Reply-To: <001101c85395$df931510$4301210a@sii.cl>
References: <001101c85395$df931510$4301210a@sii.cl>
Message-ID:

Hi,

> from numarray import *
> from scipy.linalg import *

You should not do this. Besides, you should use numpy instead of
numarray, which is more evolved, and SciPy relies on numpy. Try the
same, but with

import numpy as n
import scipy.linalg as linalg

and then

linalg.det(ar)

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lbolla at gmail.com  Thu Jan 10 09:53:59 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Thu, 10 Jan 2008 15:53:59 +0100
Subject: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a
	little question
In-Reply-To: <001101c85395$df931510$4301210a@sii.cl>
References: <001101c85395$df931510$4301210a@sii.cl>
Message-ID: <80c99e790801100653w3e0fa02buc9bab34c2a25d46@mail.gmail.com>

no idea...
but if you use numpy, no problem!

#import numarray as N
import numpy as N
import scipy.linalg as S

ar = N.array([[4.0, -2.0, 1.0], \
              [-2.0, 4.0, -2.0], \
              [1.0, -2.0, 3.0]])
print 'Before det() ar = \n', ar

print N.det(ar)              # Get determinant of ar
print 'After det() ar = \n', ar

hth,
L.

On 1/10/08, Juan Martinez wrote:
>
> Hello Everybody:
>
> I'm a newbie user of SciPy, and I try to get the determinant of a
> matrix, but the matrix object is modified by the det() method of
> linalg.
>
> A little piece of code with an example:
>
> CODE
>
> #!/usr/bin/env python
> # coding: latin-1
>
> # Modules import
> from numarray import *
> from scipy.linalg import *
>
> ar = array([[4.0, -2.0, 1.0], \
>             [-2.0, 4.0, -2.0], \
>             [1.0, -2.0, 3.0]])
> print 'Before det() ar = \n', ar
>
> print det(ar)                # Get determinant of ar
> print 'After det() ar = \n', ar
>
> END CODE
>
> ON SCREEN
>
> Before det() ar =
> [[ 4. -2.  1.]
>  [-2.  4. -2.]
>  [ 1. -2.  3.]]
> 24.0
> After det() ar =
> [[ 4.   -0.5   0.25]
>  [-2.    3.   -0.5 ]
>  [ 1.   -1.5   2.  ]]
>
> END OUTPUT
>
> Please, help me to know what I'm doing wrong.
>
> Thank you in advance
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aisaac at american.edu  Thu Jan 10 09:55:18 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 10 Jan 2008 09:55:18 -0500
Subject: [SciPy-user] I'm a newbie in Python ans SciPy...
so i've a little question In-Reply-To: <001101c85395$df931510$4301210a@sii.cl> References: <001101c85395$df931510$4301210a@sii.cl> Message-ID: I no longer have numarray, but that is probably not the issue. I do not see this behavior with the NumPy and SciPy Windows binaries. Cheers, Alan Isaac Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> N.__version__ '1.0.4' >>> import scipy as S >>> S.__version__ '0.6.0' >>> import numpy.linalg as L >>> ar = N.array([[4.0, -2.0, 1.0], \ ... [-2.0, 4.0, -2.0], \ ... [1.0, -2.0, 3.0]]) >>> ar array([[ 4., -2., 1.], [-2., 4., -2.], [ 1., -2., 3.]]) >>> L.det(ar) 24.0 >>> ar array([[ 4., -2., 1.], [-2., 4., -2.], [ 1., -2., 3.]]) >>> import scipy.linalg as Ls >>> Ls.det(ar) 24.0 >>> ar array([[ 4., -2., 1.], [-2., 4., -2.], [ 1., -2., 3.]]) From lbolla at gmail.com Thu Jan 10 09:55:53 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 10 Jan 2008 15:55:53 +0100 Subject: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a little question In-Reply-To: <80c99e790801100653w3e0fa02buc9bab34c2a25d46@mail.gmail.com> References: <001101c85395$df931510$4301210a@sii.cl> <80c99e790801100653w3e0fa02buc9bab34c2a25d46@mail.gmail.com> Message-ID: <80c99e790801100655i2548098fhdc26e7120796007f@mail.gmail.com> ops, I meant: print S.det(ar) # Get determinant of ar sorry for the noise, L. On 1/10/08, lorenzo bolla wrote: > > no idea... > but if you use numpy, no problem! > > > #import numarray as N > import numpy as N > import scipy.linalg as S > > ar = N.array([[4.0, -2.0, 1.0], \ > [-2.0, 4.0, -2.0], \ > [1.0, -2.0, 3.0]]) > print 'Before det() ar = \n', ar > > print N.det(ar) # Get > determinant of ar > print 'After det() ar = \n', ar > > > hth, > L. > > > On 1/10/08, Juan Martinez wrote: > > > Hello Everybody: > > > > I'm a newbie user of Scipy, and i try to get a determinant of a matrix, > > but the matrix object is modified for the method det() of linalg > > > > A little piece of with example.. > > > > CODE > > > > #!/usr/bin/env python > > # coding: latin-1 > > > > # Modules import > > from numarray import * > > from scipy.linalg import * > > > > ar = array([[4.0, -2.0, 1.0], \ > > [-2.0, 4.0, -2.0], \ > > [1.0, -2.0, 3.0]]) > > print 'Before det() ar = \n', ar > > > > print det(ar) # Get > > determinant of ar > > print 'After det() ar = \n', ar > > > > END CODE > > > > > > ON SCREEN > > > > Before det() ar = > > [[ 4. -2. 1.] > > [-2. 4. -2.] > > [ 1. -2. 3.]] > > After det() ar = > > [[ 4. -0.5 0.25] > > [-2. 3. -0.5 ] > > [ 1. -1.5 2. ]] > > > > END OUTPUT > > > > Please, help me to know what i'm doing wrong. > > > > Thank you in advance > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > Lorenzo Bolla > lbolla at gmail.com > http://lorenzobolla.emurse.com/ -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From lbolla at gmail.com  Thu Jan 10 09:58:43 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Thu, 10 Jan 2008 15:58:43 +0100
Subject: [SciPy-user] Mie
In-Reply-To: <478617D3.7020404@imtek.de>
References: <478617D3.7020404@imtek.de>
Message-ID: <80c99e790801100658g98d57b3i399bf8501cad993a@mail.gmail.com>

If you found it, or if you'd like to write your own, please let me know.
I'm trying to collect some useful electromagnetic routines in one
package called EMpy (empy.sourceforge.net) and a piece of code like
this might be a useful inclusion.
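To give an idea of the kind of routine I mean, here is a rough, untested
sketch of the standard Bohren & Huffman coefficients a_n, b_n and the
efficiencies Qext and Qsca, using the sph_jn/sph_yn routines in
scipy.special. It handles a real refractive index m only (an absorbing
sphere needs complex-argument Bessel functions or the usual
logarithmic-derivative recursion), and the angular functions pi_n, tau_n
for the scattering pattern are left out:

import numpy as np
from scipy import special

def mie_coefficients(m, x):
    """Mie a_n, b_n for relative refractive index m (real) and size
    parameter x = 2*pi*radius/wavelength, for n = 1..nmax."""
    nmax = int(np.ceil(x + 4.0 * x**(1.0 / 3.0) + 2.0))  # Wiscombe's cutoff
    # Riccati-Bessel psi_n(z) = z*j_n(z) and xi_n(z) = z*(j_n(z) + i*y_n(z))
    jn, jnp = special.sph_jn(nmax, x)      # j_n(x) and j_n'(x), n = 0..nmax
    yn, ynp = special.sph_yn(nmax, x)
    psi  = x * jn
    psip = jn + x * jnp                    # d/dx [x j_n(x)]
    xi   = x * (jn + 1j * yn)
    xip  = (jn + 1j * yn) + x * (jnp + 1j * ynp)
    jm, jmp = special.sph_jn(nmax, m * x)  # j_n(mx) and j_n'(mx)
    psim  = m * x * jm
    psimp = jm + m * x * jmp               # d/d(mx) [mx j_n(mx)]
    a = (m * psim * psip - psi * psimp) / (m * psim * xip - xi * psimp)
    b = (psim * psip - m * psi * psimp) / (psim * xip - m * xi * psimp)
    return a[1:], b[1:]                    # Mie sums start at n = 1

def mie_efficiencies(m, x):
    a, b = mie_coefficients(m, x)
    n = np.arange(1, len(a) + 1)
    qext = (2.0 / x**2) * np.sum((2 * n + 1) * (a + b).real)
    qsca = (2.0 / x**2) * np.sum((2 * n + 1) * (np.abs(a)**2 + np.abs(b)**2))
    return qext, qsca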
Thank you very much for your help, It was very usefull Juan -----Mensaje original----- De: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] En nombre de scipy-user-request at scipy.org Enviado el: Jueves, 10 de Enero de 2008 12:10 Para: scipy-user at scipy.org Asunto: SciPy-user Digest, Vol 53, Issue 18 Send SciPy-user mailing list submissions to scipy-user at scipy.org To subscribe or unsubscribe via the World Wide Web, visit http://projects.scipy.org/mailman/listinfo/scipy-user or, via email, send a message with subject or body 'help' to scipy-user-request at scipy.org You can reach the person managing the list at scipy-user-owner at scipy.org When replying, please edit your Subject line so it is more specific than "Re: Contents of SciPy-user digest..." Today's Topics: 1. Re: I'm a newbie in Python ans SciPy... so i've a little question (lorenzo bolla) 2. Re: I'm a newbie in Python ans SciPy... so i've a little question (Alan G Isaac) 3. Re: I'm a newbie in Python ans SciPy... so i've a little question (lorenzo bolla) 4. Re: Mie (lorenzo bolla) 5. Re: Mie (Stef Mientki) ---------------------------------------------------------------------- Message: 1 Date: Thu, 10 Jan 2008 15:53:59 +0100 From: "lorenzo bolla" Subject: Re: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a little question To: jmartinezs at sii.cl, "SciPy Users List" Message-ID: <80c99e790801100653w3e0fa02buc9bab34c2a25d46 at mail.gmail.com> Content-Type: text/plain; charset="iso-8859-1" no idea... but if you use numpy, no problem! #import numarray as N import numpy as N import scipy.linalg as S ar = N.array([[4.0, -2.0, 1.0], \ [-2.0, 4.0, -2.0], \ [1.0, -2.0, 3.0]]) print 'Before det() ar = \n', ar print N.det(ar) # Get determinant of ar print 'After det() ar = \n', ar hth, L. On 1/10/08, Juan Martinez wrote: > > Hello Everybody: > > I'm a newbie user of Scipy, and i try to get a determinant of a > matrix, but the matrix object is modified for the method det() of > linalg > > A little piece of with example.. > > CODE > > #!/usr/bin/env python > # coding: latin-1 > > # Modules import > from numarray import * > from scipy.linalg import * > > ar = array([[4.0, -2.0, 1.0], \ > [-2.0, 4.0, -2.0], \ > [1.0, -2.0, 3.0]]) > print 'Before det() ar = \n', ar > > print det(ar) # Get > determinant of ar > print 'After det() ar = \n', ar > > END CODE > > > ON SCREEN > > Before det() ar = > [[ 4. -2. 1.] > [-2. 4. -2.] > [ 1. -2. 3.]] > After det() ar = > [[ 4. -0.5 0.25] > [-2. 3. -0.5 ] > [ 1. -1.5 2. ]] > > END OUTPUT > > Please, help me to know what i'm doing wrong. > > Thank you in advance > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20080110/9a8ea20c /attachment-0001.html ------------------------------ Message: 2 Date: Thu, 10 Jan 2008 09:55:18 -0500 From: Alan G Isaac Subject: Re: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a little question To: scipy-user at scipy.org Message-ID: Content-Type: TEXT/PLAIN; CHARSET=UTF-8 I no longer have numarray, but that is probably not the issue. I do not see this behavior with the NumPy and SciPy Windows binaries. 
Cheers, Alan Isaac Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> N.__version__ '1.0.4' >>> import scipy as S >>> S.__version__ '0.6.0' >>> import numpy.linalg as L >>> ar = N.array([[4.0, -2.0, 1.0], \ ... [-2.0, 4.0, -2.0], \ ... [1.0, -2.0, 3.0]]) >>> ar array([[ 4., -2., 1.], [-2., 4., -2.], [ 1., -2., 3.]]) >>> L.det(ar) 24.0 >>> ar array([[ 4., -2., 1.], [-2., 4., -2.], [ 1., -2., 3.]]) >>> import scipy.linalg as Ls >>> Ls.det(ar) 24.0 >>> ar array([[ 4., -2., 1.], [-2., 4., -2.], [ 1., -2., 3.]]) ------------------------------ Message: 3 Date: Thu, 10 Jan 2008 15:55:53 +0100 From: "lorenzo bolla" Subject: Re: [SciPy-user] I'm a newbie in Python ans SciPy... so i've a little question To: jmartinezs at sii.cl, "SciPy Users List" Message-ID: <80c99e790801100655i2548098fhdc26e7120796007f at mail.gmail.com> Content-Type: text/plain; charset="iso-8859-1" ops, I meant: print S.det(ar) # Get determinant of ar sorry for the noise, L. On 1/10/08, lorenzo bolla wrote: > > no idea... > but if you use numpy, no problem! > > > #import numarray as N > import numpy as N > import scipy.linalg as S > > ar = N.array([[4.0, -2.0, 1.0], \ > [-2.0, 4.0, -2.0], \ > [1.0, -2.0, 3.0]]) > print 'Before det() ar = \n', ar > > print N.det(ar) # Get > determinant of ar > print 'After det() ar = \n', ar > > > hth, > L. > > > On 1/10/08, Juan Martinez wrote: > > > Hello Everybody: > > > > I'm a newbie user of Scipy, and i try to get a determinant of a > > matrix, but the matrix object is modified for the method det() of > > linalg > > > > A little piece of with example.. > > > > CODE > > > > #!/usr/bin/env python > > # coding: latin-1 > > > > # Modules import > > from numarray import * > > from scipy.linalg import * > > > > ar = array([[4.0, -2.0, 1.0], \ > > [-2.0, 4.0, -2.0], \ > > [1.0, -2.0, 3.0]]) > > print 'Before det() ar = \n', ar > > > > print det(ar) # Get > > determinant of ar > > print 'After det() ar = \n', ar > > > > END CODE > > > > > > ON SCREEN > > > > Before det() ar = > > [[ 4. -2. 1.] > > [-2. 4. -2.] > > [ 1. -2. 3.]] > > After det() ar = > > [[ 4. -0.5 0.25] > > [-2. 3. -0.5 ] > > [ 1. -1.5 2. ]] > > > > END OUTPUT > > > > Please, help me to know what i'm doing wrong. > > > > Thank you in advance > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > -- > Lorenzo Bolla > lbolla at gmail.com > http://lorenzobolla.emurse.com/ -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20080110/1e0766ab /attachment-0001.html ------------------------------ Message: 4 Date: Thu, 10 Jan 2008 15:58:43 +0100 From: "lorenzo bolla" Subject: Re: [SciPy-user] Mie To: "SciPy Users List" Message-ID: <80c99e790801100658g98d57b3i399bf8501cad993a at mail.gmail.com> Content-Type: text/plain; charset="iso-8859-1" If you found it, or if you'd like to write your own, please let me know. I'm trying to collect some useful electromagnetic routines in one package called EMpy (empy.sourceforge.net) and a piece of code like this might be a useful inclusion. 
Regards, Lorenzo On 1/10/08, Lars Friedrich wrote: > > Hello, > > is there some open source code for Python/scipy that will compute the > Mie coefficients in light scattering for different particle sizes, > refractive indices, and scattering angles? > > Lars > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20080110/c1571d63 /attachment-0001.html ------------------------------ Message: 5 Date: Thu, 10 Jan 2008 16:10:01 +0100 From: Stef Mientki Subject: Re: [SciPy-user] Mie To: SciPy Users List Message-ID: <47863549.8030505 at ru.nl> Content-Type: text/plain; charset=ISO-8859-1; format=flowed hi Lorenzo, lorenzo bolla wrote: > If you found it, or if you'd like to write your own, please let me know. > I'm trying to collect some useful electromagnetic routines in one > package called EMpy (empy.sourceforge.net > ) and a piece of code like this might be > a useful inclusion. it this of any interest ? http://www.mare.ee/indrek/ephi/ cheers, Stef > > Regards, > Lorenzo > > > On 1/10/08, *Lars Friedrich* > wrote: > > Hello, > > is there some open source code for Python/scipy that will compute the > Mie coefficients in light scattering for different particle sizes, > refractive indices, and scattering angles? > > Lars > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > Lorenzo Bolla > lbolla at gmail.com > http://lorenzobolla.emurse.com/ > ---------------------------------------------------------------------- > -- > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629. The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629. ------------------------------ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user End of SciPy-user Digest, Vol 53, Issue 18 ****************************************** From lbolla at gmail.com Thu Jan 10 10:29:41 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 10 Jan 2008 16:29:41 +0100 Subject: [SciPy-user] Mie In-Reply-To: <47863549.8030505@ru.nl> References: <478617D3.7020404@imtek.de> <80c99e790801100658g98d57b3i399bf8501cad993a@mail.gmail.com> <47863549.8030505@ru.nl> Message-ID: <80c99e790801100729v7caa2ff7hb83e0d16fe715f22@mail.gmail.com> Yes, thanks! I'll drop him a line. L. On 1/10/08, Stef Mientki wrote: > > hi Lorenzo, > > lorenzo bolla wrote: > > If you found it, or if you'd like to write your own, please let me know. > > I'm trying to collect some useful electromagnetic routines in one > > package called EMpy (empy.sourceforge.net > > ) and a piece of code like this might be > > a useful inclusion. > it this of any interest ? 
> http://www.mare.ee/indrek/ephi/
>
> cheers,
> Stef
>
> > Regards,
> > Lorenzo
> >
> > On 1/10/08, *Lars Friedrich* wrote:
> >
> >     Hello,
> >
> >     is there some open source code for Python/scipy that will compute the
> >     Mie coefficients in light scattering for different particle sizes,
> >     refractive indices, and scattering angles?
> >
> >     Lars
> >     _______________________________________________
> >     SciPy-user mailing list
> >     SciPy-user at scipy.org
> >     http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> > --
> > Lorenzo Bolla
> > lbolla at gmail.com
> > http://lorenzobolla.emurse.com/
>
> Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629.
> The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jmartinezs at sii.cl Thu Jan 10 10:45:08 2008
From: jmartinezs at sii.cl (Juan Martinez)
Date: Thu, 10 Jan 2008 12:45:08 -0300
Subject: [SciPy-user] SciPy-user Digest, Vol 53, Issue 18
In-Reply-To: References:
Message-ID: <002001c8539f$c733def0$4301210a@sii.cl>

Thanks Alan, after reading all the mails, I have a better understanding of Scipy and Python

Juan

From timmichelsen at gmx-topmail.de Thu Jan 10 16:46:37 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Thu, 10 Jan 2008 22:46:37 +0100
Subject: [SciPy-user] Control Systems Toolbox
In-Reply-To: <47851633.1020304@ru.nl>
References: <89198da10801081216g656660cak946c36c2b348d601@mail.gmail.com> <478403F6.3000302@ru.nl> <89198da10801090105h68e27db2o8bdb34f547107fdf@mail.gmail.com> <47851633.1020304@ru.nl>
Message-ID:

Hello!
> I think there are 2 kinds of users of Scipy,
> the science oriented people
> and the engineering oriented people.
> And for the second group of people there's a huge gap between Scipy and
> packages like MatLab/LabView:
> compare for instance documentation and user-interface.
> So it would be very welcome if engineers had a better entrance to Scipy.
I really like how you put this. Well said.

One has to acknowledge that there are scientists that do not focus mainly on computing tasks. This is only part of their work. For instance, some disciplines of natural sciences or earth sciences are mostly occupied with observing nature. They may not even take programming classes in university. Starting their analysis with a graphical tool really lowers the barriers and increases the efficiency because the basic things are taken care of by the logic beneath. An advanced user would then start expanding via coding extensions or improvements.

> btw, I found another one this week "Pyphant"
Thanks for sharing this link (http://www.fmf.uni-freiburg.de/service/Servicegruppen/sg_wissinfo/Software/Pyphant) with us. It really looks promising and I will check it out.

If one explores the paper linked from the page of Pyphant:
http://indico.cern.ch/conferenceTimeTable.py?confId=44&detailLevel=contribution&viewMode=room
http://archive.pythonpapers.org/ThePythonPapersVolume2Issue3.pdf
a lot of other initiatives that develop GUI frontends can be found. Interestingly, each of them has a similar goal, sometimes even the same direction, but starts a new project nearly from scratch.

Please do also keep us updated on your work with Pylab. I am really interested and think it would be useful to scipy users for the reasons you listed and I expanded above.

Kind regards,
Timmie

From tjhnson at gmail.com Thu Jan 10 21:02:00 2008
From: tjhnson at gmail.com (Tom Johnson)
Date: Thu, 10 Jan 2008 18:02:00 -0800
Subject: [SciPy-user] allclose friend
Message-ID:

allclose() is neat in that it handles the 'special' cases of inf and nan. Does there exist a similar function close()?
That is, I want to do elementwise float comparisons of two arrays (returning an array of booleans)...and I want all the special cases to be handled. It seems like this should be an obvious function. There are enough lines in allclose() that I don't want to have to reimplement it every time on my own.

From tjhnson at gmail.com Thu Jan 10 21:08:43 2008
From: tjhnson at gmail.com (Tom Johnson)
Date: Thu, 10 Jan 2008 18:08:43 -0800
Subject: Re: [SciPy-user] allclose friend
In-Reply-To: References:
Message-ID:

On Jan 10, 2008 6:02 PM, Tom Johnson wrote:
> allclose() is neat in that it handles the 'special' cases of inf and
> nan. Does there exist a similar function close()? That is, I want to
> do elementwise float comparisons of two arrays (returning an array of
> booleans)...and I want all the special cases to be handled. It seems
> like this should be an obvious function. There are enough lines in
> allclose() that I don't want to have to reimplement it every time on my
> own.

I know I can just loop through the arrays calling allclose on singlets, but is this the preferred way to do it?

From lbolla at gmail.com Fri Jan 11 03:44:07 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Fri, 11 Jan 2008 09:44:07 +0100
Subject: Re: [SciPy-user] allclose friend
In-Reply-To: References:
Message-ID: <80c99e790801110044r1f0fee6ch78bc7f09c3ecbca1@mail.gmail.com>

How do you expect NaN to be handled?
It looks like allclose() gives False with NaN, because NaN==NaN is always False.
Shouldn't it use numpy.isnan?
Can't you simply use == to obtain a boolean array?

In [21]: x
Out[21]: array([ 1., 2., Inf, NaN])

In [22]: y
Out[22]: array([ 1., 0., Inf, NaN])

In [23]: z
Out[23]: array([ 1., 2., Inf, NaN])

In [24]: x == y
Out[24]: array([ True, False, True, False], dtype=bool)

In [25]: x == z
Out[25]: array([ True, True, True, False], dtype=bool)

In [26]: numpy.allclose(x,y)
Out[26]: False

In [27]: numpy.allclose(x,z)
Out[27]: False

L.

On 1/11/08, Tom Johnson wrote:
> On Jan 10, 2008 6:02 PM, Tom Johnson wrote:
> > allclose() is neat in that it handles the 'special' cases of inf and
> > nan. Does there exist a similar function close()? That is, I want to
> > do elementwise float comparisons of two arrays (returning an array of
> > booleans)...and I want all the special cases to be handled. It seems
> > like this should be an obvious function. There are enough lines in
> > allclose() that I don't want to have to reimplement it every time on my
> > own.
>
> I know I can just loop through the arrays calling allclose on
> singlets, but is this the preferred way to do it?
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From s.mientki at ru.nl Fri Jan 11 04:09:38 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Fri, 11 Jan 2008 10:09:38 +0100
Subject: [SciPy-user] is there an equivalent of ICALAB ?
Message-ID: <47873252.9030504@ru.nl>

hello,

I would love to do some tests with ICALAB
http://www.bsp.brain.riken.jp/ICALAB/ICALABSignalProc/
but don't want to go back to MatLab ;-)

Is there something (similar) available in SciPy ?

thanks,
Stef Mientki

Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629.
The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629.

From lorenzo.isella at gmail.com Fri Jan 11 04:43:07 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Fri, 11 Jan 2008 10:43:07 +0100
Subject: [SciPy-user] Upgrade Installation of scipy on FC5
Message-ID:

Dear All,
Quite some time ago I installed SciPy on my FC5 system following the suggestions given on the mailing list (see quoted message below). At the time, that installed SciPy 0.5.x, and now I would like to upgrade to the new release. How can I do that without ending up with a broken system? I am afraid of experimenting too much on this machine, which has to be up and running all the time.
BTW, meanwhile there has been a new NumPy release as well, so I suppose that has to be taken into account.
Simply, I would like to know if I can blindly repeat the procedure followed last April or whether there is something I should do differently.
Many thanks

Lorenzo

Message: 5
Date: Mon, 30 Apr 2007 09:23:45 -0700
From: "Jarrod Millman"
Subject: Re: [SciPy-user] Installation SciPy on FC5
To: "SciPy Users List"
Message-ID:
Content-Type: text/plain; charset="iso-8859-1"

On 4/30/07, Lorenzo Isella wrote:
>
> Dear All,
> I am running Fedora Core 5 on my desktop at work and I would like to
> install SciPy to run some simulations.
>
> Before ending up in dependency hell (I tried installing Blas and it
> went like a breeze, but something went wrong when I tried with Atlas),
> I would like to know if there is a smarter way of doing this.
> Many thanks for your help.

You should be able to do something like this:
http://projects.scipy.org/neuroimaging/ni/wiki/DevelopmentInstallFedora
Just skip the NIPY-specific steps and you don't need to enable the models code in the sandbox.

Good luck,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20070430/81b0d62b/attachment-0001.html

------------------------------

From tjhnson at gmail.com Fri Jan 11 13:24:20 2008
From: tjhnson at gmail.com (Tom Johnson)
Date: Fri, 11 Jan 2008 10:24:20 -0800
Subject: Re: [SciPy-user] allclose friend
In-Reply-To: <80c99e790801110044r1f0fee6ch78bc7f09c3ecbca1@mail.gmail.com>
References: <80c99e790801110044r1f0fee6ch78bc7f09c3ecbca1@mail.gmail.com>
Message-ID:

On Jan 11, 2008 12:44 AM, lorenzo bolla wrote:
> How do you expect NaN to be handled?
> It looks like allclose() gives False with NaN, because NaN==NaN is always
> False.
> Shouldn't it use numpy.isnan?
> Can't you simply use == to obtain a boolean array?
>
> In [21]: x
> Out[21]: array([ 1., 2., Inf, NaN])
>
> In [22]: y
> Out[22]: array([ 1., 0., Inf, NaN])
>
> In [23]: z
> Out[23]: array([ 1., 2., Inf, NaN])
>
> In [24]: x == y
> Out[24]: array([ True, False, True, False], dtype=bool)
>
> In [25]: x == z
> Out[25]: array([ True, True, True, False], dtype=bool)
>
> In [26]: numpy.allclose(x,y)
> Out[26]: False
>
> In [27]: numpy.allclose(x,z)
> Out[27]: False
>
> L.

Well, I'm not really interested in *only* testing NaN. My goal is floating point comparisons...if my data happens to have NaN or infs being compared to floats/inf/NaN, then I want these handled properly as well. Most of all, I want a *single* command to do this.
I tried vectorizing allclose, but it doesn't seem to allow me to specify atol or rtol anymore (which is required).

>>> from scipy import allclose, vectorize
>>> allclose_v = vectorize(allclose)
>>> allclose_v([5.000000001,5,5],[5,5,inf])
array([ True, True, False], dtype=bool)
>>> allclose_v([1.000000001,NaN],[1,NaN])
array([ True, True], dtype=bool)
>>> allclose_v([1.00000000,inf],[1,NaN])
array([ True, False], dtype=bool)

...and it fails for inf comparisons...

>>> allclose_v([1.00000001,inf],[1,inf])
IndexError: 0-d arrays can't be indexed

Certainly, this should be handled properly, giving [True, True]. Does that clarify the request? One function for elementwise float comparisons, properly handling inf, NaN....with atol, rtol options.

From emanuele at relativita.com Fri Jan 11 17:04:39 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Fri, 11 Jan 2008 23:04:39 +0100
Subject: Re: [SciPy-user] is there an equivalent of ICALAB ?
In-Reply-To: <47873252.9030504@ru.nl>
References: <47873252.9030504@ru.nl>
Message-ID: <4787E7F7.30308@relativita.com>

Stef Mientki wrote:
> hello,
>
> I would love to do some tests with ICALAB
> http://www.bsp.brain.riken.jp/ICALAB/ICALABSignalProc/
> but don't want to go back to MatLab ;-)
>
> Is there something (similar) available in SciPy ?
>
> thanks,
> Stef Mientki

If you are looking for ICA you can use MDP (no, it is not Markov Decision Process :) but Modular toolkit for Data Processing):

http://mdp-toolkit.sourceforge.net/tutorial.html

It is based on numpy.

Here you can find a list of algorithms implemented, some of them being ICA (CuBICA, FastICA):

http://mdp-toolkit.sourceforge.net/tutorial.html#node-list

It has not GUI. And I'm very happy of that.

Emanuele
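(As a quick sketch of what the MDP route looks like in practice: the FastICANode usage follows the MDP tutorial linked above, while the toy mixing data is made up for illustration.)

import numpy
import mdp

# two independent sources, linearly mixed
t = numpy.linspace(0, 10, 5000)
sources = numpy.array([numpy.sin(2*t), numpy.sign(numpy.sin(3*t))]).T
mixed = numpy.dot(sources, numpy.array([[1.0, 0.5], [0.4, 1.0]]))

ica = mdp.nodes.FastICANode()
estimated = ica.execute(mixed)   # rows are observations, columns the estimated sources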
From beckers at orn.mpg.de Sat Jan 12 03:47:35 2008
From: beckers at orn.mpg.de (Gabriel J.L. Beckers)
Date: Sat, 12 Jan 2008 09:47:35 +0100
Subject: Re: [SciPy-user] is there an equivalent of ICALAB ?
In-Reply-To: <47873252.9030504@ru.nl>
References: <47873252.9030504@ru.nl>
Message-ID: <1200127655.6811.9.camel@gabriel-desktop>

Hi Stef,

As Emanuele already said MDP is the way to go. It is highly recommended.

It has two ICA algorithms. I ported a third one (JADE) to NumPy/Python, see

http://www.gbeckers.nl/pages/neuroinf.html

where you can download it. I am in the process of integrating that code into MDP.

Best, Gabriel

On Fri, 2008-01-11 at 10:09 +0100, Stef Mientki wrote:
> hello,
>
> I would love to do some tests with ICALAB
> http://www.bsp.brain.riken.jp/ICALAB/ICALABSignalProc/
> but don't want to go back to MatLab ;-)
>
> Is there something (similar) available in SciPy ?
>
> thanks,
> Stef Mientki
>
> Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629.
> The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From stefan at sun.ac.za Sat Jan 12 03:50:52 2008
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 12 Jan 2008 10:50:52 +0200
Subject: Re: [SciPy-user] allclose friend
In-Reply-To: References:
Message-ID: <20080112085052.GC15436@mentat.za.net>

Hi Tom

How about something like

import numpy as np

def close(x,y,decimal=6):
    x = np.asarray(x)
    y = np.asarray(y)
    def compare(x, y, decimal=decimal):
        return np.around(abs(x-y),decimal) <= 10.0**(-decimal)
    out = np.zeros(len(x),dtype=bool)
    out |= (x == y)                      # handle inf
    out |= (np.isnan(x) & np.isnan(y))   # handle nan
    out |= compare(x,y)                  # handle rest
    return out

Regards
Stéfan

On Thu, Jan 10, 2008 at 06:02:00PM -0800, Tom Johnson wrote:
> allclose() is neat in that it handles the 'special' cases of inf and
> nan. Does there exist a similar function close()? That is, I want to
> do elementwise float comparisons of two arrays (returning an array of
> booleans)...and I want all the special cases to be handled. It seems
> like this should be an obvious function. There are enough lines in
> allclose() that I don't want to have to reimplement it every time on my
> own.
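(For what it's worth, applied to the arrays from earlier in the thread, the function above should behave roughly as below; the outputs are my own check, not from the original post, and adding rtol/atol keywords in the spirit of allclose() would be a small change to compare().)

>>> x = np.array([1., 2., np.inf, np.nan])
>>> y = np.array([1., 0., np.inf, np.nan])
>>> close(x, y)
array([ True, False,  True,  True], dtype=bool)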
From beckers at orn.mpg.de Sat Jan 12 04:04:29 2008
From: beckers at orn.mpg.de (Gabriel J.L. Beckers)
Date: Sat, 12 Jan 2008 10:04:29 +0100
Subject: Re: [SciPy-user] is there an equivalent of ICALAB ?
In-Reply-To: <1200127655.6811.9.camel@gabriel-desktop>
References: <47873252.9030504@ru.nl> <1200127655.6811.9.camel@gabriel-desktop>
Message-ID: <1200128669.6811.25.camel@gabriel-desktop>

And now that I am at it: if there are people interested in getting more blind source separation (BSS) techniques into SciPy, we should organize something to get this going. In neuroscience and other fields BSS/ICA is becoming increasingly important. It is a pity to see that there are so many good open source initiatives in neuroscience (e.g. EEGLAB and Chronux) that are based on MATLAB, while I am sure that by now everybody (user/programmer/taxpayer) would be much better off if they used Python.

Gabriel

On Sat, 2008-01-12 at 09:47 +0100, Gabriel J.L. Beckers wrote:
> Hi Stef,
>
> As Emanuele already said MDP is the way to go. It is highly recommended.
>
> It has two ICA algorithms. I ported a third one (JADE) to NumPy/Python, see
>
> http://www.gbeckers.nl/pages/neuroinf.html
>
> where you can download it. I am in the process of integrating that
> code into MDP.
>
> Best, Gabriel
>
> On Fri, 2008-01-11 at 10:09 +0100, Stef Mientki wrote:
> > hello,
> >
> > I would love to do some tests with ICALAB
> > http://www.bsp.brain.riken.jp/ICALAB/ICALABSignalProc/
> > but don't want to go back to MatLab ;-)
> >
> > Is there something (similar) available in SciPy ?
> >
> > thanks,
> > Stef Mientki
> >
> > Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629.
> > The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629.
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user

From millman at berkeley.edu Sat Jan 12 05:24:43 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 12 Jan 2008 02:24:43 -0800
Subject: Re: [SciPy-user] is there an equivalent of ICALAB ?
In-Reply-To: <1200128669.6811.25.camel@gabriel-desktop>
References: <47873252.9030504@ru.nl> <1200127655.6811.9.camel@gabriel-desktop> <1200128669.6811.25.camel@gabriel-desktop>
Message-ID:

On Jan 12, 2008 1:04 AM, Gabriel J.L. Beckers wrote:
> And now that I am at it: if there are people interested in getting more
> blind source separation (BSS) techniques into SciPy, we should organize
> something to get this going. In neuroscience and other fields BSS/ICA is
> becoming increasingly important. It is a pity to see that there are so
> many good open source initiatives in neuroscience (e.g. EEGLAB and
> Chronux) that are based on MATLAB, while I am sure that by now everybody
> (user/programmer/taxpayer) would be much better off if they used Python.

There is definite interest. I know at least a few people who have been talking about working on this as well. I have to go to sleep now, but just wanted to voice my agreement.

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From emanuele at relativita.com Sat Jan 12 09:33:48 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Sat, 12 Jan 2008 15:33:48 +0100
Subject: Re: [SciPy-user] is there an equivalent of ICALAB ?
In-Reply-To: <1200128669.6811.25.camel@gabriel-desktop>
References: <47873252.9030504@ru.nl> <1200127655.6811.9.camel@gabriel-desktop> <1200128669.6811.25.camel@gabriel-desktop>
Message-ID: <4788CFCC.9080404@relativita.com>

Gabriel J.L. Beckers wrote:
> And now that I am at it: if there are people interested in getting more
> blind source separation (BSS) techniques into SciPy, we should organize
> something to get this going. In neuroscience and other fields BSS/ICA is
> becoming increasingly important. It is a pity to see that there are so
> many good open source initiatives in neuroscience (e.g. EEGLAB and
> Chronux) that are based on MATLAB, while I am sure that by now everybody
> (user/programmer/taxpayer) would be much better off if they used Python.
>
> Gabriel

Gabriel, it is definitely interesting to have ICA algorithms in Python. I work in the machine learning area and from now on my domain of application is neuroimaging, and ICA is on my wishlist. MDP is a cool project but I'd suggest putting your code in scikits, too:

http://scipy.org/scipy/scikits

That is the best place where different ICA algorithms should reside, in my opinion. In future some scikits code could merge into scipy.

Emanuele

From lorenzo.isella at gmail.com Sat Jan 12 10:43:32 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Sat, 12 Jan 2008 16:43:32 +0100
Subject: [SciPy-user] Finding Element Position and Sorting
Message-ID: <4788E024.4000102@gmail.com>

Dear All,
Unfortunately I have not been able to find online what I was looking for. Say that you have N_ob objects, labeled 0,1,...,N_ob-1, which belong to N_g groups labeled in turn 0,1,...,N_g-1, with N_g<=N_ob. You know how many objects each group is made up of.
To keep it simple, consider 8 objects and 3 groups, and assume you know what group each object belongs to.
For instance, the object vector is given by:
Obj = [0 1 2 3 4 5 6 7]
The vector giving the membership of each object to any of the groups 0, 1 or 2 is:
Mem = [1 1 0 2 2 0 2 1]
So, 2 objects in group 0, 3 in group 1 and 3 in group 2. Now, I am looking for an efficient way of building a vector gathering the identities of the objects in the three groups, i.e.:
Id = [2 5 | 0 1 7 | 3 4 6]
where the blocks correspond to groups 0, 1 and 2.
Any suggestions?
Many thanks

Lorenzo

From wnbell at gmail.com Sat Jan 12 10:52:14 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Sat, 12 Jan 2008 09:52:14 -0600
Subject: Re: [SciPy-user] Finding Element Position and Sorting
In-Reply-To: <4788E024.4000102@gmail.com>
References: <4788E024.4000102@gmail.com>
Message-ID:

On Jan 12, 2008 9:43 AM, Lorenzo Isella wrote:
> For instance, the object vector is given by:
> Obj = [0 1 2 3 4 5 6 7]
> The vector giving the membership of each object to any of the groups 0, 1
> or 2 is:
> Mem = [1 1 0 2 2 0 2 1]
> So, 2 objects in group 0, 3 in group 1 and 3 in group 2.
> Now, I am looking for an efficient way of building a vector gathering
> the identities of the objects in the three groups, i.e.:
> Id = [2 5 | 0 1 7 | 3 4 6]
> where the blocks correspond to groups 0, 1 and 2.

Try argsort:

from scipy import *
Obj = array([0, 1, 2, 3, 4, 5, 6, 7])
Mem = array([1, 1, 0, 2, 2, 0, 2, 1])
print argsort(Mem)
print Obj[argsort(Mem)]

If you have multiple keys (Mem in your example), then you can use lexsort()

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From aisaac at american.edu Sat Jan 12 11:27:51 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Sat, 12 Jan 2008 11:27:51 -0500
Subject: Re: [SciPy-user] Finding Element Position and Sorting
In-Reply-To: <4788E024.4000102@gmail.com>
References: <4788E024.4000102@gmail.com>
Message-ID:

Using `argsort` was a good suggestion. More generally you can use a DSU pattern (e.g., below). (I mention this since you speak of unspecified "objects"; it is not as useful for this specific example.)

Cheers, Alan Isaac

>>> a = range(8)
>>> m = [1,1,0,2,2,0,2,1]
>>> from itertools import izip
>>> ag1 = [y for x,y in sorted(izip(m,a))]
>>> ag1
[2, 5, 0, 1, 7, 3, 4, 6]
>>> import numpy as N
>>> ag2 = N.fromiter((y for x,y in sorted(izip(m,a))),dtype='int')
>>> ag2
array([2, 5, 0, 1, 7, 3, 4, 6])
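(A small extension of the argsort approach above that also recovers where each group starts and ends; the bincount/cumsum bookkeeping is an added sketch, not from the original posts.)

from numpy import array, argsort, bincount, cumsum

Obj = array([0, 1, 2, 3, 4, 5, 6, 7])
Mem = array([1, 1, 0, 2, 2, 0, 2, 1])

Id = Obj[argsort(Mem)]    # [2 5 0 1 7 3 4 6]
sizes = bincount(Mem)     # [2 3 3], number of objects per group
bounds = cumsum(sizes)    # [2 5 8]; group 0 is Id[:2], group 1 is Id[2:5], group 2 is Id[5:8]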
From s.mientki at ru.nl Sat Jan 12 12:02:19 2008
From: s.mientki at ru.nl (Stef Mientki)
Date: Sat, 12 Jan 2008 18:02:19 +0100
Subject: Re: [SciPy-user] is there an equivalent of ICALAB ?
In-Reply-To: <4787E7F7.30308@relativita.com>
References: <47873252.9030504@ru.nl> <4787E7F7.30308@relativita.com>
Message-ID: <4788F29B.7020304@ru.nl>

Emanuele Olivetti wrote:
> Stef Mientki wrote:
>> hello,
>>
>> I would love to do some tests with ICALAB
>> http://www.bsp.brain.riken.jp/ICALAB/ICALABSignalProc/
>> but don't want to go back to MatLab ;-)
>>
>> Is there something (similar) available in SciPy ?
>>
>> thanks,
>> Stef Mientki
>
> If you are looking for ICA you can use MDP (no, it is not
> Markov Decision Process :) but Modular toolkit for
> Data Processing):
>
> http://mdp-toolkit.sourceforge.net/tutorial.html
>
> It is based on numpy.
>
> Here you can find a list of algorithms implemented, some
> of them being ICA (CuBICA, FastICA):
>
> http://mdp-toolkit.sourceforge.net/tutorial.html#node-list

thank you all guys,

> It has not GUI. And I'm very happy of that.
I hope you mean "I'm very happy that there's a non-GUI interface" ;-)
At least for me the MatLab interface is far easier for evaluating whether these techniques are of any value for my kind of signals.

cheers,
Stef Mientki

> Emanuele
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From roger.herikstad at gmail.com Sat Jan 12 19:46:22 2008
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Sun, 13 Jan 2008 08:46:22 +0800
Subject: [SciPy-user] Finding all combinations of numbers
Message-ID:

Hi all,
I was wondering if there is a function in scipy for generating all possible combinations of a set of numbers? E.g [1,2,3] -> [[1,2],[1,3],[2,3],[1,2,3]]. Basically I'm looking for a function that can generate all unique pairs, triplets, quadruples, etc, etc. Any thoughts?

~ Roger
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dwf at cs.toronto.edu Sat Jan 12 22:56:18 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sat, 12 Jan 2008 22:56:18 -0500
Subject: [SciPy-user] Finding all combinations of numbers
Message-ID: <20080113035636.6EBEA39C074@new.scipy.org>

What you're looking for is called the power set:

http://en.wikipedia.org/wiki/Power_set

I don't know offhand if Scipy would include this (it seems a bit out of scope) but there is a simple recursive (and slightly more complicated iterative) algorithm for computing the power set, the pseudocode for which can be found easily on the web (article above is a good starting point). Note that strictly speaking the power set of S (set of all subsets of S) contains the empty set, and most algorithms for computing the power set rely on this, however it should be easy to remove the not so useful empty case once you're done.

Cheers,
DWF
-----Original Message-----
From: Roger Herikstad
Sent: January 12, 2008 7:46 PM
To: SciPy Users List
Subject: [SciPy-user] Finding all combinations of numbers

Hi all,
I was wondering if there is a function in scipy for generating all possible combinations of a set of numbers? E.g [1,2,3] -> [[1,2],[1,3],[2,3],[1,2,3]]. Basically I'm looking for a function that can generate all unique pairs, triplets, quadruples, etc, etc. Any thoughts?

~ Roger

From dwf at cs.toronto.edu Sat Jan 12 23:21:26 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sat, 12 Jan 2008 23:21:26 -0500
Subject: [SciPy-user] Finding all combinations of numbers
Message-ID: <20080113042114.3103B39C105@new.scipy.org>

Looks like I spoke too soon: SAGE apparently contains an implementation for any iterable object.

http://sage.scipy.org/sage/doc/html/ref/module-sage.misc.misc.html

Again you'll likely need to remove the empty list.
(Apologies if a truncated version of this message appears, my hand slipped and I'm not sure I cancelled it in time)

DWF
-----Original Message-----
From: Roger Herikstad
Sent: January 12, 2008 7:46 PM
To: SciPy Users List
Subject: [SciPy-user] Finding all combinations of numbers

Hi all,
I was wondering if there is a function in scipy for generating all possible combinations of a set of numbers? E.g [1,2,3] -> [[1,2],[1,3],[2,3],[1,2,3]]. Basically I'm looking for a function that can generate all unique pairs, triplets, quadruples, etc, etc. Any thoughts?

~ Roger

From strawman at astraw.com Sun Jan 13 01:34:38 2008
From: strawman at astraw.com (Andrew Straw)
Date: Sat, 12 Jan 2008 22:34:38 -0800
Subject: Re: [SciPy-user] Finding all combinations of numbers
In-Reply-To: <20080113035636.6EBEA39C074@new.scipy.org>
References: <20080113035636.6EBEA39C074@new.scipy.org>
Message-ID: <4789B0FE.3030901@astraw.com>

In addition to what David Warde-Farley said, I also have this lurking in my code base:

def setOfSubsets(L):
    """find all subsets of L

    from Alex Martelli:
    http://mail.python.org/pipermail/python-list/2001-January/067815.html
    """
    N = len(L)
    return [ [ L[i] for i in range(N)
               if X & (1L<<i) ]
             for X in range(2**N) ]

From dwf at cs.toronto.edu Sun Jan 13 02:05 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sun, 13 Jan 2008 02:05 -0500
Subject: Re: [SciPy-user] Finding all combinations of numbers
In-Reply-To: <4789B0FE.3030901@astraw.com>
References: <20080113035636.6EBEA39C074@new.scipy.org> <4789B0FE.3030901@astraw.com>
Message-ID: <2473DDF8-0C14-47DF-96FB-DD7BEDCC460E@cs.toronto.edu>

On 13-Jan-08, at 1:34 AM, Andrew Straw wrote:
> def setOfSubsets(L):
>     """find all subsets of L
>
>     from Alex Martelli:
>     http://mail.python.org/pipermail/python-list/2001-January/067815.html
>     """
>     N = len(L)
>     return [ [ L[i] for i in range(N)
>                if X & (1L<<i) ]
>              for X in range(2**N) ]

Hi Andrew,

That's a clever little snippet. :) Just to point out to Roger in case he wants to use it: if I read it correctly, it doesn't include L (which is, like the empty set, unwanted in practice) because range(x) gives you [0,1,...x-1]. So for the list [1,2,3] it will return all subsets of size 1 and size 2, plus the empty list.

If you wanted to exclude the empty, replace range(2**N) with range(1,2**N), skipping the number 0. If you want it to include the entire list, make the upper limit 2**N+1.

Cheers,
DWF

From dwf at cs.toronto.edu Sun Jan 13 02:12:16 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sun, 13 Jan 2008 02:12:16 -0500
Subject: Re: [SciPy-user] Finding all combinations of numbers
In-Reply-To: <2473DDF8-0C14-47DF-96FB-DD7BEDCC460E@cs.toronto.edu>
References: <20080113035636.6EBEA39C074@new.scipy.org> <4789B0FE.3030901@astraw.com> <2473DDF8-0C14-47DF-96FB-DD7BEDCC460E@cs.toronto.edu>
Message-ID:

On 13-Jan-08, at 2:05 AM, David Warde-Farley wrote:
> If you wanted to exclude the empty, replace range(2**N) with
> range(1,2**N), skipping the number 0. If you want it to include the
> entire list, make the upper limit 2**N+1.

And once again, I'm double-replying. I goofed; 2**N is the right upper limit to include L itself, so to exclude the 'full' subset, use 2**N - 1.

DWF

From roger.herikstad at gmail.com Sun Jan 13 02:48:18 2008
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Sun, 13 Jan 2008 15:48:18 +0800
Subject: Re: [SciPy-user] Finding all combinations of numbers
In-Reply-To: References: <20080113035636.6EBEA39C074@new.scipy.org> <4789B0FE.3030901@astraw.com> <2473DDF8-0C14-47DF-96FB-DD7BEDCC460E@cs.toronto.edu>
Message-ID:

Thanks a lot guys! I can definitely use this..
: )

~ Roger

On Jan 13, 2008 3:12 PM, David Warde-Farley wrote:
> On 13-Jan-08, at 2:05 AM, David Warde-Farley wrote:
>
> > If you wanted to exclude the empty, replace range(2**N) with
> > range(1,2**N), skipping the number 0. If you want it to include the
> > entire list, make the upper limit 2**N+1.
>
> And once again, I'm double-replying. I goofed; 2**N is the right upper
> limit to include L itself, so to exclude the 'full' subset, use 2**N -
> 1.
>
> DWF
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lorenzo.isella at gmail.com Sun Jan 13 09:00:47 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Sun, 13 Jan 2008 15:00:47 +0100
Subject: [SciPy-user] How to delete unused array in SciPy
Message-ID:

Dear All,
While using SciPy to postprocess some large datasets, I often run out of memory. I think that this could be improved if I was able to get rid of some large arrays I need only at some point in my calculations. How do I remove an array in Python? Apart from re-defining it as something "smaller", is there a way to free the memory which had been allocated for that array?
Many thanks

Lorenzo

From gael.varoquaux at normalesup.org Sun Jan 13 09:05:38 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 13 Jan 2008 15:05:38 +0100
Subject: Re: [SciPy-user] How to delete unused array in SciPy
In-Reply-To: References:
Message-ID: <20080113140538.GG24375@phare.normalesup.org>

On Sun, Jan 13, 2008 at 03:00:47PM +0100, Lorenzo Isella wrote:
> While using SciPy to postprocess some large datasets, I often run out
> of memory. I think that this could be improved if I was able to get rid
> of some large arrays I need only at some point in my calculations. How
> do I remove an array in Python? Apart from re-defining it as something
> "smaller", is there a way to free the memory which had been allocated
> for that array?

The manual way is to use the "del" operator: if a is the array you want to delete, "del a" will do what you want.

The clean way of doing this is to use functions to limit the scope of the array a: if you define "a" in a function, then when you exit the function, if no references to a have been passed out of the function, a is automatically garbage collected.

One can't say it too many times: make your code modular, limit the size of your functions. This is good design that makes your code reusable and easy to modify, and it will improve your memory footprint in the cases you are interested in.

HTH,

Gaël
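(A small sketch of both approaches described above; the array size and the helper function are made up for illustration.)

import numpy as N

a = N.zeros((5000, 5000))      # roughly 200 MB of float64
# ... use a ...
del a                          # drop the reference so the memory can be reclaimed

def column_sums(filename):
    big = N.loadtxt(filename)  # 'big' exists only inside this function
    return big.sum(axis=0)     # only the small result survives the call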
From aisaac at american.edu Sun Jan 13 09:44:40 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Sun, 13 Jan 2008 09:44:40 -0500
Subject: Re: [SciPy-user] Finding all combinations of numbers
In-Reply-To: References: <20080113035636.6EBEA39C074@new.scipy.org> <4789B0FE.3030901@astraw.com> <2473DDF8-0C14-47DF-96FB-DD7BEDCC460E@cs.toronto.edu>
Message-ID:

Here's a powerset creator that should work for any iterable. It is easily changed to a generator, which might be wise if your iterables are of any length.

def subsets(s):
    result = [[]]
    for xi in s:
        result += list( subset+[xi] for subset in result )
    return result

Cheers,
Alan Isaac
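(Following up on the generator remark above: a lazy variant along the lines of Andrew's bit-mask snippet, yielding one subset at a time so only O(len(L)) memory is in use at once; this is an added sketch, not from the original posts.)

def isubsets(L):
    N = len(L)
    for X in xrange(2**N):    # the bits of X select which elements are in
        yield [L[i] for i in range(N) if X & (1 << i)]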
From timmichelsen at gmx-topmail.de Sun Jan 13 18:40:28 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Mon, 14 Jan 2008 00:40:28 +0100
Subject: Re: [SciPy-user] flagging no data in timeseries
In-Reply-To: <200801070030.05554.pgmdevlist@gmail.com>
References: <200712201137.35658.pgmdevlist@gmail.com> <47816CF4.5010309@gmx-topmail.de> <200801070030.05554.pgmdevlist@gmail.com>
Message-ID: <478AA16C.8040909@gmx-topmail.de>

Hello again,
>>>>>> myvalues_ts_hourly = masked_where(myvalues_ts_hourly , -999)
>
>> All I got as output was:
>> --
>>
>> What does that mean?
>
> That it's masked. The '--' is the default way to display masked values. Check
> the mask directly w/
> myvalues_ts_hourly.mask

I did as you suggested. Now, all values are flagged/masked.

Please give me a comment on my code below. I have also created a small sample data set further down this message.

I would highly appreciate it if you could give me a code example on how to read this data in with all -999 values masked as NoData and then create a timeseries object with it.

Many thanks in advance.

Kind regards,
Timmie

*** START CODE ***
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as N
import maskedarray as MA
import datetime
import timeseries as TS
from timeseries.lib.moving_funcs import mov_average_expw
from timeseries import plotlib as TPL
import pylab

# project settings contains file names etc.
import project_settings

### some functions
## plot functions
def plot_series(series):
    """ do a simple plot on a time series
    input: a time series object
    """
    fig = TPL.tsfigure()
    fsp = fig.add_tsplot(111)
    fsp.tsplot(series, '-') # comment for moving average only
    fsp.format_dateaxis()
    dates = series.dates
    quarter_starts = dates[dates.quarter != (dates-1).quarter]
    fsp.set_xticks(quarter_starts.tovalue())
    fsp.grid()
    fsp.tsplot(series, '-', mov_average_expw(series, 40), 'r--') # comment for moving average only
    # fsp.tsplot(mov_average_expw(series, 40), 'r--') # uncomment for moving average only
    fsp.set_xlim(int(series.start_date), int(series.end_date))
    pylab.show()
    pylab.savefig('myvalues.png')
    # pylab.savefig('myvalues_'+location+'.png')

def csv_report(series, csv_file_name):
    """ write a CSV report for a series
    input: timeseries object
    output: file name
    """
    csv_file = open(csv_file_name, 'w')
    strfmt = lambda x: '"'+str(x)+'"'
    #fmtfunc = [None, None, strfmt]
    # csvReport = TS.Report(series, fmtfunc=fmtfunc, mask_rep='#N/A', delim=',', fixed_width=False)
    csvReport = TS.Report(series, fmtfunc=None, delim=',', fixed_width=False)
    csvReport() # output to sys.stdout
    csvReport(output=csv_file) # output to file

### program code
# load data into array
data = N.loadtxt(project_settings.datafile, comments='#', delimiter='\t',
                 converters=None, skiprows=2, usecols=None, unpack=False)

# define start date
D_hr_start = TS.Date(freq='HR', year=2001, month=5, day=10, hour=0)

# subscript only desired data column
desired_values = data[:,5]

# create timeseries object
desired_values_ts_hourly = TS.time_series(desired_values, start_date=D_hr_start)

# mask NoData values (-999)
desired_values_ts_hourly_masked = MA.masked_where( -999, desired_values_ts_hourly)

# show masked values
print desired_values_ts_hourly_masked.mask

### prepare simple reports
# timeseries to use for report
# report_series = desired_values_ts_hourly
report_series = desired_values_ts_hourly_masked

# output
csv_report(report_series, project_settings.csvfile)
*** END CODE ***

*** START SAMPLE DATA ***
date hour_of_day value
01.02.2004 1 247
01.02.2004 2 889
01.02.2004 3 914
01.02.2004 4 292
01.02.2004 5 183
01.02.2004 6 251
01.02.2004 7 953
01.02.2004 8 156
01.02.2004 9 991
01.02.2004 10 557
01.02.2004 11 581
01.02.2004 12 354
01.02.2004 13 485
01.02.2004 14 655
01.02.2004 15 -999
01.02.2004 16 -999
01.02.2004 17 -999
01.02.2004 18 744
01.02.2004 19 445
01.02.2004 20 374
01.02.2004 21 168
01.02.2004 22 995
01.02.2004 23 943
01.02.2004 24 326
02.02.2004 1 83.98
02.02.2004 2 302.26
02.02.2004 3 310.76
02.02.2004 4 -999
02.02.2004 5 62.22
02.02.2004 6 85.34
02.02.2004 7 324.02
02.02.2004 8 53.04
02.02.2004 9 336.94
02.02.2004 10 189.38
02.02.2004 11 197.54
02.02.2004 12 120.36
02.02.2004 13 164.9
02.02.2004 14 222.7
02.02.2004 15 34.74
02.02.2004 16 85.34
02.02.2004 17 53.04
02.02.2004 18 252.96
02.02.2004 19 151.3
02.02.2004 20 -999
02.02.2004 21 57.12
02.02.2004 22 338.3
02.02.2004 23 320.62
02.02.2004 24 110.84
*** END SAMPLE DATA ***
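(No follow-up appears in this stretch of the archive, but for what it's worth: in the maskedarray API, masked_where takes the condition first and the array second, and masked_values handles exactly this fixed flag-value case, so the masking step above would read something like the sketch below. Variable names follow Timmie's script; the calls are the maskedarray API as documented at the time.)

import maskedarray as MA

# condition first, then the array:
masked = MA.masked_where(desired_values_ts_hourly == -999, desired_values_ts_hourly)
# or, for a fixed fill value such as -999, more directly:
masked = MA.masked_values(desired_values_ts_hourly, -999)
print masked.mask   # True exactly where the -999 flags were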
From timmichelsen at gmx-topmail.de Sun Jan 13 19:08:09 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Mon, 14 Jan 2008 01:08:09 +0100
Subject: Re: [SciPy-user] assignment of hours of day in time series
In-Reply-To: References: <47816C69.3090007@gmx-topmail.de>
Message-ID: <478AA7E9.2090906@gmx-topmail.de>

Hi!
> sorry for not getting back to you earlier.
No problem. I am also only occasionally able to continue to explore further...

> Since the time "24:00" doesn't actually exist (as far as I
> am aware anyway),
I think different disciplines in science have different measurements and data. I think as far as geoscience and climate science is concerned this hour could exist, at least virtually.

Example:
An anemometer is connected to a logging device which is set to take one record every 10min. This can either be a punctual value or an average of the last 10min. before the value was recorded.
The raw data from the logger will contain:
23:00:00, 2
23:10:00, 2,3
23:20:00, 2,4
23:30:00, 2,6
23:40:00, 2,6
23:50:00, 2
00:00:00, 1,8
Some data providers tend to attribute these measurements to the following hour when converted to hourly frequency: 24. The data point 14.01.2008 24 would then contain all values from 23:00 to 00:00. The next data point would be 15.01.2008 01, which contains all values from 00:00 to 1:00.
To get such data well into a time series object I would have to define
start_dat = TS.Date(freq='HR', year=2008, month=1, day=14, hour=0)
and not
start_dat = TS.Date(freq='HR', year=2008, month=1, day=14, hour=1).

Is it now a little bit more understandable why I asked for the 1-24 format?

> you will have to rely on somewhat of a hack to get your
> desired output. Try this:
>
>>>> import timeseries as ts
>>>> series = ts.time_series(range(400, 430), start_date=ts.now('hourly'))
>>>> hours = ts.time_series(series.hour + 1, dates=series.dates)
>>>> hour_fmtfunc = lambda x : '%i:00' % x
> ts.Report(hours, series, datefmt='%d-%b-%Y', delim=' ',
> fmtfunc=[hour_fmtfunc, None])()
> 06-Jan-2008 23:00 400
> 06-Jan-2008 24:00 401
> 07-Jan-2008 1:00 402
> 07-Jan-2008 2:00 403
> 07-Jan-2008 3:00 404
> 07-Jan-2008 4:00 405
> 07-Jan-2008 5:00 406
> 07-Jan-2008 6:00 407
> 07-Jan-2008 7:00 408
> 07-Jan-2008 8:00 409
> 07-Jan-2008 9:00 410
> 07-Jan-2008 10:00 411
> 07-Jan-2008 11:00 412
> 07-Jan-2008 12:00 413
> 07-Jan-2008 13:00 414
> 07-Jan-2008 14:00 415
> 07-Jan-2008 15:00 416
> 07-Jan-2008 16:00 417
> 07-Jan-2008 17:00 418
> 07-Jan-2008 18:00 419
> 07-Jan-2008 19:00 420
> 07-Jan-2008 20:00 421
> 07-Jan-2008 21:00 422
> 07-Jan-2008 22:00 423
> 07-Jan-2008 23:00 424
> 07-Jan-2008 24:00 425
> 08-Jan-2008 1:00 426
> 08-Jan-2008 2:00 427
> 08-Jan-2008 3:00 428
> 08-Jan-2008 4:00 429
>
> =================================
>
> basically add one to the "hour" property of the series to get your desired
> output.
Thanks, I will do as you suggested.

Nevertheless, we should think about implementing a feature in the package that takes care of all this, like a function that sets the hours of a day from 1 to 24 instead of 0 to 23.

Kind regards,
Timmie

P.S.: Once I get my data in and these questions solved I'd like to add this to the wiki.
Shall I then open a page Timeseries/FAQ or Timeseries/Recipes?

From alan at ajackson.org Sun Jan 13 21:40:45 2008
From: alan at ajackson.org (Alan Jackson)
Date: Sun, 13 Jan 2008 20:40:45 -0600
Subject: [SciPy-user] weave.inline question - passing an array of an array
Message-ID: <20080113204045.73431d61@nova.oplnk.net>

I have a bit of weave.inline code I have been quite happy with, but I now need to extend it. Essentially I need to replace a spot where I multiply some elements of an array by elements of a matrix, with a weighted sum of the same multiplies, where I can now have an arbitrary number of vectors and corresponding matrices.

float Score = Matrix[s1[idx1]*NMatrix[1] + s2[idx2]];

becomes

float Score = 0.0
for (unsigned int alphaidx = 0; alphaidx

From tgrav at mac.com Mon Jan 14 09:57:33 2008
From: tgrav at mac.com (Tommy Grav)
Date: Mon, 14 Jan 2008 09:57:33 -0500
Subject: [SciPy-user] Integrating
Message-ID:

I am trying to write some code that needs to integrate orbits of asteroids forward and backwards in time. Is there some python interface out there to a Runge-Kutta 5th order integrator or a Bulirsch-Stoer integrator?

Cheers
Tommy
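(Neither a 5th-order Runge-Kutta nor Bulirsch-Stoer, but worth noting alongside the replies below: scipy's long-standing odeint wrapper (LSODA) already integrates this kind of system forward and backwards in time. A minimal two-body sketch in units with GM=1, made up for illustration.)

import numpy as N
from scipy.integrate import odeint

def orbit(state, t, GM=1.0):
    # state = [x, y, vx, vy]; central point mass at the origin
    x, y, vx, vy = state
    r3 = (x*x + y*y)**1.5
    return [vx, vy, -GM*x/r3, -GM*y/r3]

state0 = [1.0, 0.0, 0.0, 1.0]         # circular orbit of radius 1
t = N.linspace(0.0, 20.0, 2001)
forward = odeint(orbit, state0, t)    # forward in time
backward = odeint(orbit, state0, -t)  # backwards: decreasing times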
From lechtlr at yahoo.com Mon Jan 14 10:52:10 2008
From: lechtlr at yahoo.com (lechtlr)
Date: Mon, 14 Jan 2008 07:52:10 -0800 (PST)
Subject: [SciPy-user] Help with py2exe
Message-ID: <262542.38636.qm@web57911.mail.re3.yahoo.com>

I am trying to create an executable for Windows using py2exe from a python script that imports scipy. I get the following info on dependencies when I build the executable, and errors on numpy/scipy when I run the executable (Test.exe). Any help would be greatly appreciated. I am using python 2.4.2.

Thanks,
Lex

C:\Python_Exe>python setup.py py2exe

*** binary dependencies ***
Your executable(s) also depend on these dlls which are not included, you may or may not need to distribute them. Make sure you have the license if you distribute any of them, and make sure you don't distribute files belonging to the operating system.

ole32.dll - C:\WINDOWS\system32\ole32.dll
OLEAUT32.dll - C:\WINDOWS\system32\OLEAUT32.dll
USER32.dll - C:\WINDOWS\system32\USER32.dll
SHELL32.dll - C:\WINDOWS\system32\SHELL32.dll
KERNEL32.dll - C:\WINDOWS\system32\KERNEL32.dll
WSOCK32.dll - C:\WINDOWS\system32\WSOCK32.dll
ADVAPI32.dll - C:\WINDOWS\system32\ADVAPI32.dll
msvcrt.dll - C:\WINDOWS\system32\msvcrt.dll
WS2_32.dll - C:\WINDOWS\system32\WS2_32.dll
MSVCP71.dll - C:\WINDOWS\system32\MSVCP71.dll
multiarray.pyd - C:\Python24\lib\site-packages\numpy\core\multiarray.pyd
VERSION.dll - C:\WINDOWS\system32\VERSION.dll
_operator.pyd - C:\Python24\lib\site-packages\numarray\_operator.pyd

C:\Python_Exe\dist>Test.exe
No scipy-style subpackage 'core' found in C:\Python_Exe\dist\library.zip\numpy. Ignoring: cannot import name typeinfo
No scipy-style subpackage 'lib' found in C:\Python_Exe\dist\library.zip\numpy. Ignoring: cannot import name typeinfo
No scipy-style subpackage 'linalg' found in C:\Python_Exe\dist\library.zip\numpy. Ignoring: cannot import name typeinfo
No scipy-style subpackage 'dft' found in C:\Python_Exe\dist\library.zip\numpy. Ignoring: cannot import name typeinfo
Traceback (most recent call last):
  File "Test.py", line 6, in ?
  File "scipy\__init__.pyc", line 25, in ?
  File "numpy\__init__.pyc", line 35, in ?
  File "numpy\_import_tools.pyc", line 173, in __call__
  File "numpy\_import_tools.pyc", line 68, in _init_info_modules
  File "", line 1, in ?
  File "numpy\random\__init__.pyc", line 3, in ?
  File "numpy\random\mtrand.pyc", line 12, in ?
  File "numpy\random\mtrand.pyc", line 10, in __load
  File "numpy.pxi", line 32, in mtrand
AttributeError: 'module' object has no attribute 'dtype'

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rob.clewley at gmail.com Mon Jan 14 11:03:32 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 14 Jan 2008 11:03:32 -0500 Subject: [SciPy-user] Integrating In-Reply-To: References: <20080114162237.74c586f2@jakubik.ta3.sk> Message-ID: Hi Tommy, I am not sure how strict your requirement is for 5th order exactly, but our PyDSTool package (pydstool.sourceforge.net) has interfaces both to the 5th order Radau integrator (a good integrator for stiff systems and those with algebraic constraints) and the Dormand-Prince 8th order multi-stage (8-5-3) integrator. Both codes were developed in the 80s by Hairer & Wanner, renowned experts who have written a couple of Springer books about these. Radau is in Fortran and Dopri is in C. Our package lets you write your vector field definitions in pseudo-code from inside a python script, which gets automatically converted to C code for linking to the integrator object files. This results in very fast integration! Many python interfaces to C integrators only support python vector field definitions, creating an additional overhead in the frequent callback to those functions. BTW we also support C-level accurate detection of discrete events like zero crossings. Feel free to contact me if you have further questions. -Rob On 14/01/2008, Tommy Grav wrote: > > On Jan 14, 2008, at 10:22 AM, Marian Jakubik wrote: > > > Hi Tommy, > > > > I am an astronomer (researcher) and for the job you mentioned I use > > the > > SWIFT package (written by Levison et. al). I think you know this > > package so my question is - "are you working on some Python-code for > > numerical integration of the small-bodies orbits?" I am very > > interested > > in it... > > I have used the SWIFT package yes. For the problem I have before me it > is a little overkill :). I need to run a lot of small integrations > with a lot of > input/output of orbits and plotting. Since code development is much > faster > in python I am mostly my coding in that these days, as it is > significantly > easier to use in handling input/output of text files. Since the > integrations > are a a little slow in pure python to be efficient I was looking for an > interface to some C/C++ integrator to speed that part up :) > > I am working on some astronomy python code. Mostly orbital mechanics > with orbital determination and integration. The code is in a very early > stage of development and has a tendency of undergoing major changes > in the underlaying class structure as I add new functionality. > > > Please contact me (on my email) and we could debate about this > > topic from the "astro-view", if you like... > > I have sent my reply to the list as well, in the hope that some other > orbital mechanics will be interested and chime in. > > Cheers > Tommy > > +----------------------------------------------------------------------- > ------------------------------------------+ > Associate Research Scientist Dept. of Physics and Astronomy > Johns Hopkins University Bloomberg 243 > tgrav at pha.jhu.edu 3400 N. Charles St. > (410) 516-7683 Baltimore, MD21218 > +----------------------------------------------------------------------- > ------------------------------------------+ > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Robert H. Clewley, Ph. D. 
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA

tel: 404-413-6420 fax: 404-651-2246
http://www.mathstat.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/

From peridot.faceted at gmail.com Mon Jan 14 11:17:46 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 14 Jan 2008 11:17:46 -0500
Subject: [SciPy-user] Integrating
In-Reply-To: References: <20080114162237.74c586f2@jakubik.ta3.sk>
Message-ID:

On 14/01/2008, Tommy Grav wrote:
>
> I have used the SWIFT package, yes. For the problem I have before me it
> is a little overkill :). I need to run a lot of small integrations
> with a lot of
> input/output of orbits and plotting. Since code development is much
> faster
> in python I am doing most of my coding in it these days, as it is
> significantly
> easier to use in handling input/output of text files. Since the
> integrations
> are a little slow in pure python to be efficient, I was looking for an
> interface to some C/C++ integrator to speed that part up :)
>
> I am working on some astronomy python code. Mostly orbital mechanics
> with orbital determination and integration. The code is in a very early
> stage of development and has a tendency of undergoing major changes
> in the underlying class structure as I add new functionality.

If your requirements are really mild, scipy has odeint. This is based on the routine LSODA from ODEPACK, which switches automatically between Adams and BDF methods depending on stiffness. Its interface is a bit limited, but it's quick and easy and requires no additional packages. scipy also has an integrator based on VODE, which I know even less about.

Anne

From stefan at sun.ac.za Mon Jan 14 11:44:17 2008
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Mon, 14 Jan 2008 18:44:17 +0200
Subject: [SciPy-user] Integrating
In-Reply-To: References: Message-ID: <20080114164417.GB2772@mentat.za.net>

Hi Tommy

The attached code is a very light-weight (4th order) Runge-Kutta integrator that I sometimes use for stepping along an integral.

Regards,
Stéfan

On Mon, Jan 14, 2008 at 09:57:33AM -0500, Tommy Grav wrote:
> I am trying to write some code that needs to integrate orbits of
> asteroids
> forward and backwards in time. Is there some python interface out there
> to a Runge-Kutta 5th order integrator or a Bulirsch-Stoer integrator?

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ode.py
Type: text/x-python
Size: 1588 bytes
Desc: not available
URL:

From hasslerjc at comcast.net Mon Jan 14 13:40:10 2008
From: hasslerjc at comcast.net (John Hassler)
Date: Mon, 14 Jan 2008 13:40:10 -0500
Subject: [SciPy-user] Integrating
In-Reply-To: References: <20080114162237.74c586f2@jakubik.ta3.sk>
Message-ID: <478BAC8A.9060101@comcast.net>

An HTML attachment was scrubbed...
URL:

From Karl.Young at ucsf.edu Mon Jan 14 17:45:49 2008
From: Karl.Young at ucsf.edu (Karl Young)
Date: Mon, 14 Jan 2008 14:45:49 -0800
Subject: [SciPy-user] Bayes net question
In-Reply-To: References: <20080114162237.74c586f2@jakubik.ta3.sk>
Message-ID: <478BE61D.9090309@ucsf.edu>

I'm starting to play with Bayes nets in a way that will require a little more than just using some of the black box packages around (e.g. I'd like to play around with using various regression models at the nodes) and would love to do my exploring in the context of SciPy, but I didn't see any such packages currently available.
I did find a python package called OpenBayes (http://www.openbayes.org/) that after a very cursory examination looked pretty nice but apparently is no longer being developed. Does anyone know if there has ever been any discussion with the author of that package re. incorporating it into SciPy?

--
Karl Young
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M)    Phone: (415) 221-4810 x3114 lab
4150 Clement Street         FAX: (415) 668-2864
San Francisco, CA 94121     Email: karl young at ucsf edu

From millman at berkeley.edu Mon Jan 14 19:09:10 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Mon, 14 Jan 2008 16:09:10 -0800
Subject: [SciPy-user] Bayes net question
In-Reply-To: <478BE61D.9090309@ucsf.edu>
References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu>
Message-ID:

On Jan 14, 2008 2:45 PM, Karl Young wrote:
> I'm starting to play with Bayes nets in a way that will require a little
> more than just using some of the black box packages around (e.g. I'd
> like to play around with using various regression models at the nodes)
> and would love to do my exploring in the context of SciPy but I didn't
> see any such packages currently available. I did find a python package
> called OpenBayes (http://www.openbayes.org/) that after a very cursory
> examination looked pretty nice but apparently is no longer being
> developed. Does anyone know if there has ever been any discussion with
> the author of that package re. incorporating it into SciPy ?

I am somewhat familiar with the project. Although after taking a quick look at it just now, it appears to have a little different history than I remember. I thought Elliot Cohen was the original author, but I guess his project must have been incorporated into this project. Elliot developed a Python Bayesian Network Toolbox as part of the first Summer of Code in 2005: http://pbnt.berlios.de/ James Tauber was his mentor. Elliot was a student at Berkeley and was actually Matthew Brett's research assistant, so Matthew and I know him pretty well. Elliot works for Microsoft now and is also heavily involved in a personal side-project, so I don't think he has much free time.

If you think this is worth turning into a scikit, Matthew and I would be happy to talk to him about whether he would be interested. If we can get his interest, I am sure that he would talk to the other developers.

If anyone else is interested in seeing this happen, please respond.

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From Karl.Young at ucsf.edu Mon Jan 14 18:53:12 2008
From: Karl.Young at ucsf.edu (Karl Young)
Date: Mon, 14 Jan 2008 15:53:12 -0800
Subject: [SciPy-user] Bayes net question
In-Reply-To: References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu>
Message-ID: <478BF5E8.9060405@ucsf.edu>

>On Jan 14, 2008 2:45 PM, Karl Young wrote:
>
>>I'm starting to play with Bayes nets in a way that will require a little
>>more than just using some of the black box packages around (e.g. I'd
>>like to play around with using various regression models at the nodes)
>>and would love to do my exploring in the context of SciPy but I didn't
>>see any such packages currently available. I did find a python package
>>called OpenBayes (http://www.openbayes.org/) that after a very cursory
>>examination looked pretty nice but apparently is no longer being
>>developed. Does anyone know if there has ever been any discussion with
>>the author of that package re. incorporating it into SciPy ?
>>
>
>I am somewhat familiar with the project. Although after taking a
>quick look at it just now, it appears to have a little different
>history than I remember. I thought Elliot Cohen was the original
>author, but I guess his project must have been incorporated into
>this project. Elliot developed a Python Bayesian Network Toolbox as
>part of the first Summer of Code in 2005: http://pbnt.berlios.de/
>James Tauber was his mentor. Elliot was a student at Berkeley and was
>actually Matthew Brett's research assistant, so Matthew and I know him
>pretty well. Elliot works for Microsoft now and is also heavily
>involved in a personal side-project, so I don't think he has much free
>time.
>
>If you think this is worth turning into a scikit, Matthew and I would
>be happy to talk to him about whether he would be interested. If we
>can get his interest, I am sure that he would talk to the other
>developers.
>
>If anyone else is interested in seeing this happen, please respond.
>

From the little I've seen of the package it looks to me like it would be worth turning into a scikit, and I would be willing to contribute some time to such an effort. But it would be nice to get input from any Bayes net experts and/or anyone who has used this package re. what they think of the code and the approach taken.

--
Karl Young
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M)    Phone: (415) 221-4810 x3114 lab
4150 Clement Street         FAX: (415) 668-2864
San Francisco, CA 94121     Email: karl young at ucsf edu

From dwf at cs.toronto.edu Mon Jan 14 19:43:45 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 14 Jan 2008 19:43:45 -0500
Subject: [SciPy-user] Bayes net question
In-Reply-To: <478BF5E8.9060405@ucsf.edu>
References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu>
Message-ID: <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu>

On 14-Jan-08, at 6:53 PM, Karl Young wrote:

> From the little I've seen of the package it looks to me like it would
> be worth turning into a scikit and I would be willing to contribute
> some
> time to such an effort. But it would be nice to get input from any
> Bayes
> net experts and/or anyone who has used this package re. what they
> think
> of the code and the approach taken.

On a superficial reading it looks like it might be the beginning of something good, though it looks as though it's been abandoned in its fairly early stages -- the restriction to discrete variables limits its usefulness in many areas. (Not to criticize Elliot's work; a general purpose Bayes net toolbox is a lofty goal and he seems to have made significant progress.)

I've heard much praise for Kevin Murphy's toolbox for Matlab (http://www.cs.ubc.ca/~murphyk/Software/BNT/bnt.html), and this might be helpful in extending PBNT or starting another Bayes net Python library. Since it is released under the LGPL, any straightforward port of Kevin's code to Python would likely need to live in scigpl.
David

From pkienzle at nist.gov Tue Jan 15 07:07:26 2008
From: pkienzle at nist.gov (Paul Kienzle)
Date: Tue, 15 Jan 2008 07:07:26 -0500
Subject: [SciPy-user] numpy array in ctype struct
Message-ID: <20080115070726.A460060@jazz.ncnr.nist.gov>

Hi,

We are trying to create the following struct in ctypes:

    struct {
        int n, k;
        double A[4][4], B[4][4];
    };

Easy enough:

    class embedded_array(Structure):
        _fields_ = [("n",c_int),("k",c_int),("A",c_double*16),("B",c_double*16)]
    instance = embedded_array()

Question:

Is there a way to map the data in A/B into a numpy array, so that we can use it directly? Alternatively, is there a way to quickly copy data from a numpy array into/out of the ctypes structure?

Thanks in advance,

- Paul

From oliphant at enthought.com Tue Jan 15 09:59:06 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 15 Jan 2008 08:59:06 -0600
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <20080115070726.A460060@jazz.ncnr.nist.gov>
References: <20080115070726.A460060@jazz.ncnr.nist.gov>
Message-ID: <478CCA3A.5060505@enthought.com>

Paul Kienzle wrote:
> Hi,
>
> We are trying to create the following struct in ctypes:
>
> struct {
>     int n, k;
>     double A[4][4], B[4][4];
> };
>
> Easy enough:
>
> class embedded_array(Structure):
>     _fields_ = [("n",c_int),("k",c_int),("A",c_double*16),("B",c_double*16)]
> instance = embedded_array()
>
> Question:
>
> Is there a way to map the data in A/B into a numpy array, so that we
> can use it directly?
>
Yes. You can create an array from a pointer to memory and a description of the data that is much like the ctypes structure. At some point, there should be a direct conversion from ctypes objects to NumPy data-types, but nobody has written that yet. At the moment you have to create a dtype object that is parallel to the ctypes one:

    import numpy as np
    dt = np.dtype([('n',np.intc),('k',np.intc),("A", float, 16), ("B", float, 16)])

Then,

    arr = np.frombuffer(instance, dtype=dt)

will create a 1-d array of these structures mapping onto the data pointed to by instance. Of course, in this case, the 1-d array only has one element.

Access to the fields of the c-structure is obtained by "dictionary" access:

    arr['n'] = 10
    arr["A"] = 1

You will notice that this accesses the memory directly:

    print instance.n
    print [x for x in instance.A]

Ask if you need more help.

Best regards,

-Travis O.

From dmitrey.kroshko at scipy.org Tue Jan 15 11:08:21 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 15 Jan 2008 18:08:21 +0200
Subject: [SciPy-user] [optimization] OpenOpt TODO list
Message-ID: <478CDA75.70403@scipy.org>

Hi all,

let me remind you that openopt is a free Python-based BSD-licensed alternative to the commercial AMPL, GAMS, TOMOPT/TOMNET. http://scipy.org/scipy/scikits/wiki/OpenOpt

Anyone is welcome to connect his solver(s) under any license (opensource and, moreover, OSI-approved licenses are much more welcome).

Let me also note: information on solver authors, homepage, license, and algorithm is stored in the output structure, for example:

    >>> r.solverInfo
    {'alg': 'Augmented Lagrangian Multipliers',
     'homepage': 'http://www.ime.usp.br/~egbirgin/tango/',
     'license': 'GPL',
     'authors': 'J. M. Martinez martinezimecc-at-gmail.com, Ernesto G. Birgin egbirgin-at-ime.usp.br, Jan Marcel Paiva Gentil jgmarcel-at-ime.usp.br'}
Also, you may be interested in the OpenOpt TODO list (created 2008-Jan-15): http://scipy.org/scipy/scikits/wiki/OO_TODO#OpenOptTODOlist

Regards,
Dmitrey

From theller at ctypes.org Tue Jan 15 11:29:31 2008
From: theller at ctypes.org (Thomas Heller)
Date: Tue, 15 Jan 2008 17:29:31 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <478CCA3A.5060505@enthought.com>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com>
Message-ID:

Travis E. Oliphant schrieb:
> Paul Kienzle wrote:
>> [...]
>> Is there a way to map the data in A/B into a numpy array, so that we
>> can use it directly?
>>
> Yes. You can create an array from a pointer to memory and a
> description of the data that is much like the ctypes structure. At
> some point, there should be a direct conversion from ctypes objects to
> NumPy data-types, but nobody has written that yet.

[...]

Is there anything that should be added to ctypes (Python 2.6 isn't too far away) that would help with this conversion?

Thanks,
Thomas

From oliphant at enthought.com Tue Jan 15 12:56:44 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 15 Jan 2008 11:56:44 -0600
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com>
Message-ID: <478CF3DC.2030407@enthought.com>

Thomas Heller wrote:
> Travis E. Oliphant schrieb:
>> [...]
>
> Is there anything that should be added to ctypes (Python 2.6 isn't too
> far away) that would help with this conversion?
>
One approach would be for ctypes objects to describe their data-type using the new additions to the struct syntax (i.e. fully support the new buffer interface including data-type description as defined in PEP 3118). This should be going in Python 2.6 as well (but I need help in the implementation).

Then, NumPy arrays will support consuming the syntax as well and it should be an easy thing to pass back and forth between NumPy array views and ctypes-objects views of memory. I think such a thing would be very powerful.

-Travis O.
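[A minimal sketch pulling this thread together: Paul's ctypes structure paired with the hand-written parallel dtype from Travis's recipe, viewing the same memory from both sides via np.frombuffer. The class and variable names are invented for the example; only the frombuffer recipe itself comes from the posts above.]

    import ctypes
    import numpy as np

    class EmbeddedArray(ctypes.Structure):
        _fields_ = [("n", ctypes.c_int), ("k", ctypes.c_int),
                    ("A", ctypes.c_double * 16), ("B", ctypes.c_double * 16)]

    # a dtype written by hand to parallel the ctypes layout
    dt = np.dtype([("n", np.intc), ("k", np.intc),
                   ("A", float, (4, 4)), ("B", float, (4, 4))])

    instance = EmbeddedArray()
    view = np.frombuffer(instance, dtype=dt)   # shares memory, no copy

    view["A"] = np.eye(4)     # write through the numpy view...
    print(instance.A[5])      # ...and read it back via ctypes: element [1][1]
                              # of the flattened 4x4 array, i.e. 1.0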
From pkienzle at nist.gov Tue Jan 15 13:04:29 2008
From: pkienzle at nist.gov (Paul Kienzle)
Date: Tue, 15 Jan 2008 13:04:29 -0500
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <478CCA3A.5060505@enthought.com>; from oliphant@enthought.com on Tue, Jan 15, 2008 at 08:59:06AM -0600
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com>
Message-ID: <20080115130429.F462016@jazz.ncnr.nist.gov>

On Tue, Jan 15, 2008 at 08:59:06AM -0600, Travis E. Oliphant wrote:
> Paul Kienzle wrote:
>> [...]
>> Is there a way to map the data in A/B into a numpy array, so that we
>> can use it directly?
>>
> Yes. You can create an array from a pointer to memory and a
> description of the data that is much like the ctypes structure.
> [...] At the moment you have to create a dtype object that is
> parallel to the ctypes one:
>
> import numpy as np
> dt = np.dtype([('n',np.intc),('k',np.intc),("A", float, 16), ("B", float, 16)])
>
> Then,
>
> arr = np.frombuffer(instance, dtype=dt)
>
> will create a 1-d array of these structures mapping onto the data
> pointed to by instance. [...]
> You will notice that this accesses the memory directly:
>
> print instance.n
> print [x for x in instance.A]

Instead of creating the ctypes struct I just used numpy to create a scalar of the correct structure. I wrapped it in a class so that I could reference the fields directly as, e.g., instance.A. I used a factory function for generating the class from the structure definition since I will need to wrap several structures. I'm attaching cstruct.py and embedarray.c where I demonstrate this. I'm particularly pleased that I can assign a 4x4 array to instance.A and it just works!

The ctypes docs talk about possible alignment issues for the structs on some architectures. They say that ctypes follows the conventions of the compiler which created it. I haven't checked if this will be a problem with the numpy solution you outline above.

- Paul

-------------- next part --------------
typedef struct {
    int n, k;
    double A[4][4], B[4][4];
} embed_array;

void call(embed_array *pv) {
    pv->n = 5;
    pv->A[1][1] = 18.652;
}

-------------- next part --------------
import ctypes, numpy, os

# Factory for ctypes/numpy struct classes
def cstruct(*fields):
    class SpecificStruct(object):
        """
        C structure usable from python.  Use a._pointer to pass the
        data as a pointer to structure in ctypes.
        """
        _dtype = numpy.dtype(list(fields))
        def __init__(self,**kw):
            self.__dict__['_array'] = numpy.zeros((),dtype=self._dtype)
            for k,v in kw.iteritems():
                self._array[k] = v
        def _getdata(self):
            return self._array.ctypes.data
        _pointer = property(_getdata,doc='ctypes data pointer to struct')
        def __getattr__(self,field):
            return self._array[field]
        def __setattr__(self,field,value):
            self._array[field] = value
    return SpecificStruct

if __name__ == "__main__":
    # Create a structure for:
    # struct {
    #     int n, k;
    #     double A[4][4], B[4][4];
    # }
    Embed = cstruct(('n',numpy.intc),('k',numpy.intc),
                    ('A',float,(4,4)),('B',float,(4,4)))

    # Initialize the structure and invoke call() in embedarray.dll
    a = Embed(k=6)
    a.A = numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
    a.A[0,0] = 12
    lib = numpy.ctypeslib.load_library('embedarray',os.path.dirname(__file__))
    lib.call(a._pointer)
    print a.A, a.k, a.n

From theller at ctypes.org Tue Jan 15 13:46:45 2008
From: theller at ctypes.org (Thomas Heller)
Date: Tue, 15 Jan 2008 19:46:45 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <478CF3DC.2030407@enthought.com>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <478CF3DC.2030407@enthought.com>
Message-ID:

Travis E. Oliphant schrieb:
> Thomas Heller wrote:
>> Travis E. Oliphant schrieb:
>>> You can create an array from a pointer to memory and a
>>> description of the data that is much like the ctypes structure. At
>>> some point, there should be a direct conversion from ctypes objects to
>>> NumPy data-types, but nobody has written that yet.
>>
>> Is there anything that should be added to ctypes (Python 2.6 isn't too
>> far away) that would help with this conversion?
>>
> One approach would be for ctypes objects to describe their data-type using
> the new additions to the struct syntax (i.e. fully support the new
> buffer interface including data-type description as defined in PEP
> 3118). This should be going in Python 2.6 as well (but I need help in
> the implementation).
>
> Then, NumPy arrays will support consuming the syntax as well and it
> should be an easy thing to pass back and forth between NumPy array views
> and ctypes-objects views of memory. I think such a thing would be very
> powerful.
>
I assume the implementation would have to take place in the py3k branch, and later it would (hopefully) be ported to the trunk?

Do I understand correctly that __array_interface__ and __array_struct__ are superseded by what's in PEP 3118, or are these only out of scope for the PEP?

Thomas

From oliphant at enthought.com Tue Jan 15 18:01:25 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 15 Jan 2008 17:01:25 -0600
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <478CF3DC.2030407@enthought.com>
Message-ID: <478D3B45.2030709@enthought.com>

Thomas Heller wrote:
> Travis E. Oliphant schrieb:
>> [...]
>
> I assume the implementation would have to take place in the py3k branch,
> and later it would (hopefully) be ported to the trunk?
>
> Do I understand correctly that __array_interface__ and __array_struct__
> are superseded by what's in PEP 3118, or are these only out of scope for the PEP?
>
They are superseded. We will have them for a while in the NumPy world, but PEP 3118 is the right concept going forward as it is in the Python Type Object.

-Travis

From listservs at mac.com Tue Jan 15 20:39:31 2008
From: listservs at mac.com (Chris)
Date: Wed, 16 Jan 2008 01:39:31 +0000 (UTC)
Subject: [SciPy-user] BLAS/LAPACK binaries for windows build
Message-ID:

Does anyone know of any available (newish) BLAS and LAPACK binaries with which to make builds for windows? I am having a heck of a time building them myself. Thanks in advance.

From david at ar.media.kyoto-u.ac.jp Wed Jan 16 02:33:52 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 16 Jan 2008 16:33:52 +0900
Subject: [SciPy-user] BLAS/LAPACK binaries for windows build
In-Reply-To: References: Message-ID: <478DB360.40207@ar.media.kyoto-u.ac.jp>

Chris wrote:
> Does anyone know of any available (newish) BLAS and LAPACK
> binaries with which to make builds for windows? I am having a
> heck of a time building them myself.
>
Do you mean the ones from NETLIB? I have sconsified versions of them (that is, I wrote scons scripts instead of the makefiles; scons works with MS compilers and on Windows, without the cygwin pain).
I planned on cleaning them before publishing it, but I can send them to you if you want them as it is, cheers, David From david at ar.media.kyoto-u.ac.jp Wed Jan 16 02:35:01 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 16 Jan 2008 16:35:01 +0900 Subject: [SciPy-user] Bayes net question In-Reply-To: <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> Message-ID: <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> David Warde-Farley wrote: > On 14-Jan-08, at 6:53 PM, Karl Young wrote: > > >> From the little I've seen of the package it looks to me like it would >> be worth turing into a scikit and I would be willing to contribute >> some >> time to such an effort. But it would be nice to get input from any >> Bayes >> net experts and/or anyone who has used this package re. what they >> think >> of the code and the approach taken. >> > > On a superficial reading it looks like it might be the beginning of > something good, though it looks as though it's been abandoned in its > fairly early stages -- the restriction to discrete variables limits > it's usefulness in many areas. (Not to criticize Elliot's work, a > general purpose Bayes net toolbox is a lofty goal and he seems to > have made significant progress). > > I've heard much praise for Kevin Murphy's toolbox for Matlab ( http:// > www.cs.ubc.ca/~murphyk/Software/BNT/bnt.html ), and this might be > helpful in extending PBNT or starting another Bayes net Python > library. Since it is released under the LGPL, any straightforward > port of Kevin's code to Python would likely need to live in scigpl. > > I actually asked K. Murphy if he would agree on dual licensing BNT and co under the BSD, for porting to scipy/scikits, and am waiting for his answer on this, David From matthieu.brucher at gmail.com Wed Jan 16 03:09:57 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 16 Jan 2008 09:09:57 +0100 Subject: [SciPy-user] Bayes net question In-Reply-To: <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> Message-ID: > > > I've heard much praise for Kevin Murphy's toolbox for Matlab ( http:// > > www.cs.ubc.ca/~murphyk/Software/BNT/bnt.html ), and this might be > > helpful in extending PBNT or starting another Bayes net Python > > library. Since it is released under the LGPL, any straightforward > > port of Kevin's code to Python would likely need to live in scigpl. > > > > > I actually asked K. Murphy if he would agree on dual licensing BNT and > co under the BSD, for porting to scipy/scikits, and am waiting for his > answer on this, This is excellent news :) BTW, the LGPL license should be OK for an inclusion in scikits, shouldn't it ? You're planning to add it to scikits.learn ? Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From david at ar.media.kyoto-u.ac.jp Wed Jan 16 03:10:55 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 16 Jan 2008 17:10:55 +0900
Subject: [SciPy-user] Bayes net question
In-Reply-To: References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp>
Message-ID: <478DBC0F.1090609@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
>
> > I've heard much praise for Kevin Murphy's toolbox for Matlab
> > (http://www.cs.ubc.ca/~murphyk/Software/BNT/bnt.html), and
> > this might be helpful in extending PBNT or starting another Bayes net Python
> > library. Since it is released under the LGPL, any straightforward
> > port of Kevin's code to Python would likely need to live in scigpl.
> >
> I actually asked K. Murphy if he would agree on dual licensing BNT and
> co under the BSD, for porting to scipy/scikits, and am waiting for
> his answer on this,
>
> This is excellent news :)

Well, nothing is done yet, so I am not sure if this can be called news yet :)

> BTW, the LGPL license should be OK for an inclusion in scikits,
> shouldn't it ?

I don't know if this has changed with the whole license discussion from a few days ago, but originally, any open source license was acceptable for scikits, which is the convention I am still following. Still, it is better to have a BSD license if possible, because it makes it possible to include code from scikits into scipy.

> You're planning to add it to scikits.learn ?

I have not thought about it. I am still not satisfied with the current situation of the scikits, to be frank, but I am not sure I have any solution to it either. Hopefully, we will be able to discuss that in the next few months (there is a plan to do a numpy/scipy sprint, + some work on ML-related code after ICASSP).

cheers,

David

From hetland at tamu.edu Wed Jan 16 05:38:02 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Wed, 16 Jan 2008 11:38:02 +0100
Subject: [SciPy-user] Memory leak in delaunay interpolator
Message-ID:

I'm not sure who else uses the delaunay package (was in scipy.sandbox, now lives in scikits), but I find it indispensable. Today I found what appears to be a memory leak in the interpolator and extrapolator objects. This simple code demonstrates the leak:

    from scipy.sandbox import delaunay  # or wherever your delaunay package lives these days
    from numpy.random import rand

    xi, yi = rand(2, 1000)
    x, y = rand(2, 100)
    tri = delaunay.Triangulation(xi, yi)
    for n in range(100000):
        interp = tri.nn_interpolator(rand(1000))
        z = interp(x, y)

I tested this code on Mac OS X 10.4, and a recent version of Ubuntu. Both show memory usage increasing consistently through the run.

Also, while I am here, does anybody have any other advice on 2D interpolation? 3D? This is something I need to do often, and I am still waiting for the perfect solution to come along.

-Rob

----
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331

From robince at gmail.com Wed Jan 16 06:24:36 2008
From: robince at gmail.com (Robin)
Date: Wed, 16 Jan 2008 11:24:36 +0000
Subject: [SciPy-user] Test failures with latest SVN
Message-ID:

Hi,

I am using Mac OS X 10.5.1 and recently updated to the latest svn version (an older version was already working ok on Leopard). I installed nose by hand. However I get a lot of errors in the tests.
I have put the output here because it is quite long: http://pastebin.com/m58c1f6af

Also at the end of the test there is a graphical window with some random text in it, so I have to press ctrl-c to get out of the python session.

I am not sure if this is a mac issue or something wrong on my system...

Robin

From matthew.brett at gmail.com Wed Jan 16 07:05:33 2008
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 16 Jan 2008 04:05:33 -0800
Subject: [SciPy-user] Test failures with latest SVN
In-Reply-To: References: Message-ID: <1e2af89e0801160405o5b39caf8o34f0fbc6e04c7244@mail.gmail.com>

Hi,

> I am using Mac OS X 10.5.1 and recently updated to the latest svn
> version (an older version was already working ok on Leopard).
> I installed nose by hand. However I get a lot of errors in the tests.

You certainly do get a lot of errors. Some of them are errors newly exposed by the nose tests, the hope being that we can use the errors to motivate us to fix the code. For example, there are a lot of weave errors that we need to clear up, from tests that may not have been run before.

But I noticed several errors that look as if you have a mix of old and new scipy code - for example, the code tries to import datasource, and the current SVN tree contains no reference to datasource. Could you do a clean install, after deleting the scipy installation directory and the build directory, and get back to us?

Thanks,

Matthew

From robince at gmail.com Wed Jan 16 07:31:38 2008
From: robince at gmail.com (Robin)
Date: Wed, 16 Jan 2008 12:31:38 +0000
Subject: [SciPy-user] Test failures with latest SVN
In-Reply-To: <1e2af89e0801160405o5b39caf8o34f0fbc6e04c7244@mail.gmail.com>
References: <1e2af89e0801160405o5b39caf8o34f0fbc6e04c7244@mail.gmail.com>
Message-ID:

I deleted the scipy directory from site-packages and the build directory and built again. The datasource error seems to have gone but I still have 23 errors and now 1 failure: http://pastebin.com/m38524f0e

Robin

From schut at sarvision.nl Wed Jan 16 09:23:02 2008
From: schut at sarvision.nl (Vincent Schut)
Date: Wed, 16 Jan 2008 15:23:02 +0100
Subject: [SciPy-user] best way to interpolate NAN's in 3d array
Message-ID:

Hi gurus,

Let's say I have a (25,50,50) array, with approximately half of the items filled, half of them NaN due to missing input data. Could someone suggest a good way to interpolate/fill these NaNs?

What I'm doing currently is using scipy.ndimage.generic_filter with an averaging 3x3x3 filter, running it recursively till all NaNs are gone. This worked fine for my prototyping arrays of 10x10x10; with 25x50x50, however, things are kind of slow.

As you can see, the accuracy of the interpolation is not very important (my implementation has lots of drawbacks compared to 'real' interpolation). The issue is that I want to use the result with ndimage.map_coordinates, which will always return NaN whenever a NaN was close to the coordinates mapped to... So the main issue is to get rid of those NaNs, and fill them with more or less the average/interpolation of the surrounding known values. Or get a NaN- or maskedarray-sensitive replacement for map_coordinates :-)

Regards,
Vincent Schut.
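[For reference, a minimal sketch of the recursive averaging Vincent describes: a NaN-aware mean over each 3x3x3 neighbourhood via scipy.ndimage.generic_filter, repeated until the holes are filled. The function names and the toy data are invented for the example, and this is the slow pure-Python-callback variant he is trying to speed up, not an optimization of it.]

    import numpy as np
    from scipy import ndimage

    def nanmean_neighbourhood(values):
        # mean of the finite values in a 3x3x3 neighbourhood; NaN if there are none
        finite = values[np.isfinite(values)]
        if finite.size == 0:
            return np.nan
        return finite.mean()

    def fill_nans(data):
        data = data.copy()
        while np.isnan(data).any():
            smoothed = ndimage.generic_filter(data, nanmean_neighbourhood,
                                              size=3, mode='nearest')
            holes = np.isnan(data)
            data[holes] = smoothed[holes]   # only overwrite the missing values
        return data

    a = np.random.rand(25, 50, 50)
    a[np.random.rand(25, 50, 50) < 0.5] = np.nan  # knock out roughly half the values
    filled = fill_nans(a)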
From wnbell at gmail.com Wed Jan 16 10:08:34 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 16 Jan 2008 09:08:34 -0600 Subject: [SciPy-user] best way to interpolate NAN's in 3d array In-Reply-To: References: Message-ID: On Jan 16, 2008 8:23 AM, Vincent Schut wrote: > Hi guru's, > > Let's say I have a (25,50,50) array, with approximately half of the > items filled, half of them NAN due to missing input data. Could someone > suggest a good way to interpolate/fill these NAN's? > > What I'm doing currently is using scipy.ndimage.generic_filter with a > averaging 3x3x3 filter, running it recursively till all NAN's are gone. > This worked fine for my prototyping arrays of 10x10x10, with 25x50x50 > however things are kind of slow. > > As you can see, the accuracy of the interpolation is not very important > (my implementation has lots of drawbacks compared to 'real' > interpolation). Issue is I want to use the result to use > ndimage.map_coordinates on, which will always return NAN whenever a NAN > was close to the coordinates mapped to... So the main issue is to get > rid of those NAN's, and fill them with more or less the > average/interpolation of the surrounding known values. Or get a NAN- or > maskedarray sensitive replacement for map_coordinates :-) I suppose you could solve a discrete Poisson problem with the NaN values as unknowns and the non-NaN values as Dirichlet boundary conditions. The solution will be very smooth and similar to your averaging approach. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From Karl.Young at ucsf.edu Wed Jan 16 12:05:29 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 16 Jan 2008 09:05:29 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <478DBC0F.1090609@ar.media.kyoto-u.ac.jp> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <478DBC0F.1090609@ar.media.kyoto-u.ac.jp> Message-ID: <478E3959.1090002@ucsf.edu> David Cournapeau wrote: >Matthieu Brucher wrote: > > >> > I've heard much praise for Kevin Murphy's toolbox for Matlab ( >> http:// >> > www.cs.ubc.ca/~murphyk/Software/BNT/bnt.html >> ), and >> this might be >> > helpful in extending PBNT or starting another Bayes net Python >> > library. Since it is released under the LGPL, any straightforward >> > port of Kevin's code to Python would likely need to live in scigpl. >> > >> > >> I actually asked K. Murphy if he would agree on dual licensing BNT and >> co under the BSD, for porting to scipy/scikits, and am waiting for >> his >> answer on this, >> >> >>This is excellent news :) >> >> >Well, nothing is done yet, so I am not sure if this can be called news >yet :) > > I've looked at the documentation for that package (but refuse to go back to Matlab re. actually using it :-)) and it looks like it's quite a bit further along than OpenBayes re. options so does it seem like a consensus that we should wait for this to be settled before any decisions about incorporating Bayes nets ? As I mentioned before I'm willing to help (e.g. with a port) once a decision is reached re. the best approach. 
--
Karl Young
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M)    Phone: (415) 221-4810 x3114 lab
4150 Clement Street         FAX: (415) 668-2864
San Francisco, CA 94121     Email: karl young at ucsf edu

From dwf at cs.toronto.edu Wed Jan 16 12:36:48 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 16 Jan 2008 12:36:48 -0500
Subject: [SciPy-user] Bayes net question
In-Reply-To: <478E3959.1090002@ucsf.edu>
References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <478DBC0F.1090609@ar.media.kyoto-u.ac.jp> <478E3959.1090002@ucsf.edu>
Message-ID: <250C3DFC-4A8D-402D-A952-E867FC57DE0F@cs.toronto.edu>

On 16-Jan-08, at 12:05 PM, Karl Young wrote:

> I've looked at the documentation for that package (but refuse to go
> back
> to Matlab re. actually using it :-)) and it looks like it's quite a
> bit
> further along than OpenBayes re. options so does it seem like a
> consensus that we should wait for this to be settled before any
> decisions about incorporating Bayes nets ? As I mentioned before I'm
> willing to help (e.g. with a port) once a decision is reached re. the
> best approach.

Likewise, I'd be willing to help with a port.

I know that BNT includes code from other projects (for one thing the IRLS implementation for optimizing the weights of a softmax from the NetLab neural networks toolbox), so it may not be totally up to Professor Murphy whether or not to dual-license the entire thing.

DWF

From josemaria.alkala at gmail.com Wed Jan 16 13:20:44 2008
From: josemaria.alkala at gmail.com (José María García Pérez)
Date: Wed, 16 Jan 2008 19:20:44 +0100
Subject: [SciPy-user] 3D (I'm not talking about rendering)
Message-ID:

Hello everybody,

I use scipy to work with big datasets, for example 300000 points. I want to do geometrical tasks with those points or lines. For example, I want to look for all the points with the same geometrical position, or with similar positions. Although it's possible to do that just using scipy, I'm looking for a package that does those tasks much faster than what I get now. Besides, I would like to do more geometrical things, like projecting or intersecting lines, ...

Thanks a lot in advance,
Jose M.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Karl.Young at ucsf.edu Wed Jan 16 13:14:33 2008
From: Karl.Young at ucsf.edu (Karl Young)
Date: Wed, 16 Jan 2008 10:14:33 -0800
Subject: [SciPy-user] Bayes net question
In-Reply-To: <250C3DFC-4A8D-402D-A952-E867FC57DE0F@cs.toronto.edu>
References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <478DBC0F.1090609@ar.media.kyoto-u.ac.jp> <478E3959.1090002@ucsf.edu> <250C3DFC-4A8D-402D-A952-E867FC57DE0F@cs.toronto.edu>
Message-ID: <478E4989.4080100@ucsf.edu>

David Warde-Farley wrote:

>On 16-Jan-08, at 12:05 PM, Karl Young wrote:
>
>>I've looked at the documentation for that package (but refuse to go
>>back to Matlab re. actually using it :-)) and it looks like it's quite a
>>bit further along than OpenBayes re. options so does it seem like a
>>consensus that we should wait for this to be settled before any
>>decisions about incorporating Bayes nets ? As I mentioned before I'm
>>willing to help (e.g.
with a port) once a decision is reached re. the >>best approach. >> >> > >Likewise, I'd be willing to help with a port. > >I know that BNT includes code from other projects (for one thing the >IRLS implementation for optimizing the weights of a softmax from the >NetLab neural networks toolbox), so it may not be totally up to >Professor Murphy whether or not to dual-license the entire thing. > > Good point though I guess we could study which of that functionality could be replaced by existing Scipy bits and which might be reasonable to rewrite (particularly if there are more than one of us working on this), e.g. we might not need everything that's in the neural networks toolbox. -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From nwagner at iam.uni-stuttgart.de Wed Jan 16 14:17:08 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 16 Jan 2008 20:17:08 +0100 Subject: [SciPy-user] Test failures with latest SVN In-Reply-To: References: Message-ID: On Wed, 16 Jan 2008 11:24:36 +0000 Robin wrote: > Hi, > > I am using Mac OS X 10.5.1 and recently updated to the >latest svn > version. (an older version was already working ok on >leopard). > I installed nose by hand. However I get a lot of errors >in the tests. > > I have put the output here because it is quite long: > http://pastebin.com/m58c1f6af > Also at the end of the test there is a graphical window >with some > random text in it, so I have to press ctrl-c to get out >of the python > session. Did you activate sandbox.lobpcg ? A small test compares the spectrum computed by symeig and lobpcg. The spectra are depicted in a plot. > > I am not sure if this a mac issue or something wrong on >my system... > > Robin > Hi Robin, I have filed four tickets wrt recent test failures/errors. http://projects.scipy.org/scipy/scipy/ticket/588 http://projects.scipy.org/scipy/scipy/ticket/587 http://projects.scipy.org/scipy/scipy/ticket/586 http://projects.scipy.org/scipy/scipy/ticket/584 BTW, there is another typo in the test output. It should be Hilbert transform of periodic functions instead of Tilbert transform of periodic functions Cheers, Nils ====================================================================== ERROR: Test matrices giving some Nan generalized eigen values. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 156, in test_falker if all(isfinite(res[:, i])): NameError: global name 'all' is not defined ====================================================================== ERROR: Test singular pair ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 132, in test_singular if all(isfinite(res[:, i])): NameError: global name 'all' is not defined ====================================================================== ERROR: Failure: ImportError (cannot import name _bspline) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/loader.py", line 363, in loadTestsFromName module = self.importer.importFromPath( File "/usr/lib/python2.4/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.4/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_bspline.py", line 9, in ? import scipy.stats.models.bspline as B File "/usr/lib/python2.4/site-packages/scipy/stats/models/bspline.py", line 23, in ? from scipy.stats.models import _bspline ImportError: cannot import name _bspline ====================================================================== ERROR: test_huber (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_scale.py", line 35, in test_huber m = scale.huber(X) File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__ for donothing in self: File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2) File "/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py", line 866, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== ERROR: test_huberaxes (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/stats/models/tests/test_scale.py", line 40, in test_huberaxes m = scale.huber(X, axis=0) File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__ for donothing in self: File "/usr/lib/python2.4/site-packages/scipy/stats/models/robust/scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. 
- subset, axis=self.axis) * Huber.c**2) File "/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py", line 866, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== ERROR: Failure: ImportError (No module named convolve) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/loader.py", line 363, in loadTestsFromName module = self.importer.importFromPath( File "/usr/lib/python2.4/site-packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.4/site-packages/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.4/site-packages/scipy/stsci/image/__init__.py", line 2, in ? from _image import * File "/usr/lib/python2.4/site-packages/scipy/stsci/image/_image.py", line 2, in ? import convolve ImportError: No module named convolve ====================================================================== ERROR: test_access_set_speed (test_scxx_sequence._TestSequenceBase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_sequence.py", line 162, in test_access_set_speed a = self.seq_type([0]) * N TypeError: 'NoneType' object is not callable ====================================================================== ERROR: test_access_speed (test_scxx_sequence._TestSequenceBase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_sequence.py", line 131, in test_access_speed a = self.seq_type([0]) * N TypeError: 'NoneType' object is not callable ====================================================================== ERROR: test_conversion (test_scxx_sequence._TestSequenceBase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_sequence.py", line 28, in test_conversion a = self.seq_type([1]) TypeError: 'NoneType' object is not callable ====================================================================== ERROR: Test the "count" method for lists. We'll assume ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_sequence.py", line 97, in test_count a = self.seq_type([1,2,'alpha',3.1416]) TypeError: 'NoneType' object is not callable ====================================================================== ERROR: Test the "in" method for lists. 
We'll assume ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_sequence.py", line 44, in test_in a = self.seq_type([1,2,'alpha',3.1416]) TypeError: 'NoneType' object is not callable ====================================================================== ERROR: no_test_no_check_return (test_wx_spec.TestWxConverter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_wx_spec.py", line 104, in no_test_no_check_return mod.compile() File "/usr/lib/python2.4/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/usr/lib/python2.4/site-packages/scipy/weave/build_tools.py", line 271, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 178, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 166, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -g -fPIC -I/usr/lib/python2.4/site-packages/scipy/weave -I/usr/lib/python2.4/site-packages/scipy/weave/scxx -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c /home/nwagner/wx_return.cpp -o /tmp/nwagner/python24_intermediate/compiler_dc4fe17de7765fced0d869942e4ccabc/home/nwagner/wx_return.o" failed with exit status 1 ====================================================================== ERROR: test_var_in (test_wx_spec.TestWxConverter) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_wx_spec.py", line 63, in test_var_in mod.compile() File "/usr/lib/python2.4/site-packages/scipy/weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/usr/lib/python2.4/site-packages/scipy/weave/build_tools.py", line 271, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 178, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 166, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -g -fPIC -I/usr/lib/python2.4/site-packages/scipy/weave -I/usr/lib/python2.4/site-packages/scipy/weave/scxx -I/usr/include/gtk-1.2 -I/usr/include/glib-1.2 -I/usr/lib/glib/include -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c /home/nwagner/wx_var_in.cpp -o /tmp/nwagner/python24_intermediate/compiler_dc4fe17de7765fced0d869942e4ccabc/home/nwagner/wx_var_in.o -I/usr/lib/wx/include/gtk2-unicode-release-2.5 -I/usr/include/wx-2.5 -DGTK_NO_CHECK_CASTS -D__WXGTK__ -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES -I/usr/lib/wx/include/gtk2-unicode-release-2.5 -I/usr/include/wx-2.5 -DGTK_NO_CHECK_CASTS -D__WXGTK__ -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES" failed with exit status 1 ====================================================================== FAIL: test_imresize (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/lib/python2.4/site-packages/scipy/misc/tests/test_pilutil.py", line 24, in test_imresize assert_equal(im1.shape,(11,22)) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 137, in assert_equal assert_equal(len(actual),len(desired),err_msg,verbose) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 0 DESIRED: 2 ====================================================================== FAIL: Test generator for parametric tests ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/case.py", line 203, in runTest self.test(*self.arg) File "/usr/lib/python2.4/site-packages/scipy/misc/tests/test_pilutil.py", line 36, in tst_fromimage assert img.min() >= imin AssertionError ====================================================================== FAIL: Test generator for parametric tests ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/case.py", line 203, in runTest self.test(*self.arg) File "/usr/lib/python2.4/site-packages/scipy/misc/tests/test_pilutil.py", line 36, in tst_fromimage assert img.min() >= imin AssertionError ====================================================================== FAIL: Test generator for parametric tests ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/case.py", line 203, in runTest self.test(*self.arg) File "/usr/lib/python2.4/site-packages/scipy/misc/tests/test_pilutil.py", line 36, in tst_fromimage assert img.min() >= imin AssertionError ====================================================================== FAIL: test_DAD (test_sa.TestSASolverPerformance) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/sandbox/multigrid/tests/test_sa.py", line 234, in test_DAD assert(avg_convergence_ratio < 0.2) AssertionError ====================================================================== FAIL: test_char_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 120, in test_char_fail self.generic_get('return_val = a["c"];') File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 108, in generic_get assert res == a['b'] AssertionError ====================================================================== FAIL: test_obj_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 147, in test_obj_fail self.generic_get(code,['a']) File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 108, in generic_get assert res == a['b'] AssertionError ====================================================================== FAIL: test_set_complex (test_scxx_object.TestObjectSetItemOpKey) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_object.py", line 865, in 
test_set_complex assert_equal(sys.getrefcount(key),4) # should be 3 File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 3 DESIRED: 4 ====================================================================== FAIL: test_char_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 120, in test_char_fail self.generic_get('return_val = a["c"];') File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 108, in generic_get assert res == a['b'] AssertionError ====================================================================== FAIL: test_obj_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 147, in test_obj_fail self.generic_get(code,['a']) File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 108, in generic_get assert res == a['b'] AssertionError ====================================================================== FAIL: test_set_complex (test_scxx_object.TestObjectSetItemOpKey) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_object.py", line 865, in test_set_complex assert_equal(sys.getrefcount(key),4) # should be 3 File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 3 DESIRED: 4 ====================================================================== FAIL: test_char_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 120, in test_char_fail self.generic_get('return_val = a["c"];') File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 108, in generic_get assert res == a['b'] AssertionError ====================================================================== FAIL: test_obj_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 147, in test_obj_fail self.generic_get(code,['a']) File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_dict.py", line 108, in generic_get assert res == a['b'] AssertionError ====================================================================== FAIL: test_set_complex (test_scxx_object.TestObjectSetItemOpKey) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/weave/tests/test_scxx_object.py", line 865, in test_set_complex assert_equal(sys.getrefcount(key),4) # should be 3 File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 3 DESIRED: 4 ---------------------------------------------------------------------- Ran 2622 tests in 731.459s FAILED 
(failures=14, errors=13)

From nwagner at iam.uni-stuttgart.de  Wed Jan 16 14:40:33 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 16 Jan 2008 20:40:33 +0100
Subject: [SciPy-user] Memory leak in delaunay interpolator
In-Reply-To:
References:
Message-ID:

On Wed, 16 Jan 2008 11:38:02 +0100 Rob Hetland wrote:
> I'm not sure who else uses the delaunay package (was in
> scipy.sandbox, now lives in scikits), but I find it indispensable.
> Today I found what appears to be a memory leak in the interpolator
> and extrapolator objects. This simple code demonstrates the leak:
>
> from scipy.sandbox import delaunay  # or wherever your delaunay package lives these days
> from numpy.random import rand
>
> xi, yi = rand(2, 1000)
> x, y = rand(2, 100)
> tri = delaunay.Triangulation(xi, yi)
>
> for n in range(100000):
>     interp = tri.nn_interpolator(rand(1000))
>     z = interp(x, y)
>
> I tested this code on Mac OS X 10.4, and a recent version of Ubuntu.
> Both show memory usage increasing consistently through the run.
>
> Also, while I am here, does anybody have any other advice on 2D
> interpolation? 3D? This is something I need to do often, and I am
> still waiting for the perfect solution to come along.
>
> -Rob
>
> ----
> Rob Hetland, Associate Professor
> Dept. of Oceanography, Texas A&M University
> http://pong.tamu.edu/~rob
> phone: 979-458-0096, fax: 979-845-6331

Hi Rob,

I have installed delaunay from scratch. Your code produces a segfault here. I will file a ticket.

from scikits import delaunay  # or wherever your delaunay package lives these days
from numpy.random import rand

xi, yi = rand(2, 1000)
x, y = rand(2, 100)
tri = delaunay.Triangulation(xi, yi)

for n in range(100000):
    interp = tri.nn_interpolator(rand(1000))
    z = interp(x, y)

(gdb) run hetland.py
Starting program: /usr/bin/python hetland.py
[Thread debugging using libthread_db enabled]
[New Thread 1077176000 (LWP 16333)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1077176000 (LWP 16333)]
0x4045cfb0 in nn_interpolate_unstructured_method (self=0x0, args=0x1) at _delaunay.cpp:504
504         CLEANUP
(gdb) bt
#0  0x4045cfb0 in nn_interpolate_unstructured_method (self=0x0, args=0x1) at _delaunay.cpp:504
#1  0x4007b583 in PyCFunction_Call () from /usr/lib/libpython2.4.so.1.0
#2  0x400b2a91 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0
#3  0x400b4bc1 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0
#4  0x4006b13a in function_call () from /usr/lib/libpython2.4.so.1.0
#5  0x40053c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0
#6  0x4005cedb in instancemethod_call () from /usr/lib/libpython2.4.so.1.0
#7  0x40053c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0
#8  0x4008ec2c in slot_tp_call () from /usr/lib/libpython2.4.so.1.0
#9  0x40053c37 in PyObject_Call () from /usr/lib/libpython2.4.so.1.0
#10 0x400b1f37 in PyEval_EvalFrame () from /usr/lib/libpython2.4.so.1.0
#11 0x400b4bc1 in PyEval_EvalCodeEx () from /usr/lib/libpython2.4.so.1.0
#12 0x400b4e95 in PyEval_EvalCode () from /usr/lib/libpython2.4.so.1.0
#13 0x400cf618 in run_node () from /usr/lib/libpython2.4.so.1.0
#14 0x400d0db3 in PyRun_SimpleFileExFlags () from /usr/lib/libpython2.4.so.1.0
#15 0x400d137a in PyRun_AnyFileExFlags () from /usr/lib/libpython2.4.so.1.0
#16 0x400d750a in Py_Main () from /usr/lib/libpython2.4.so.1.0
#17 0x0804871a in main (argc=1, argv=0x1) at ccpython.cc:10

Cheers,
Nils
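Rob's original report is a leak rather than a crash; a cheap way to confirm it is to log the process's resident set size as the loop runs. A minimal sketch (Linux-only, reading VmRSS from /proc; the import path is the scikits one Nils uses above):

import os
from scikits import delaunay
from numpy.random import rand

def rss_kb():
    # resident set size in kB, read from /proc (Linux-only)
    for line in open('/proc/%d/status' % os.getpid()):
        if line.startswith('VmRSS:'):
            return int(line.split()[1])

xi, yi = rand(2, 1000)
x, y = rand(2, 100)
tri = delaunay.Triangulation(xi, yi)

for n in range(100000):
    interp = tri.nn_interpolator(rand(1000))
    z = interp(x, y)
    if n % 1000 == 0:
        # should stay roughly flat; steady growth means a leak
        print n, rss_kb()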
From pav at iki.fi  Wed Jan 16 16:52:45 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 16 Jan 2008 21:52:45 +0000 (UTC)
Subject: [SciPy-user] Memory leak in delaunay interpolator
References:
Message-ID:

Wed, 16 Jan 2008 20:40:33 +0100, Nils Wagner wrote:
> On Wed, 16 Jan 2008 11:38:02 +0100 Rob Hetland wrote:
>> I'm not sure who else uses the delaunay package (was in scipy.sandbox,
>> now lives in scikits), but I find it indispensable. Today I found what
>> appears to be a memory leak in the interpolator and extrapolator
>> objects. This simple code demonstrates the leak:

This may be the same as bug #382 (patch is available in the ticket).

Also #376 may be relevant for the observed segfault: there are
(or were?) known crasher bugs in the delaunay module.

--
Pauli Virtanen

From yennifersantiago at gmail.com  Wed Jan 16 17:39:27 2008
From: yennifersantiago at gmail.com (Yennifer Santiago)
Date: Wed, 16 Jan 2008 18:39:27 -0400
Subject: [SciPy-user] SciPy-user Digest, Vol 53, Issue 28
In-Reply-To:
References:
Message-ID: <41bc705b0801161439k6ac5ce64m153842367d49c999@mail.gmail.com>

Hello,

I need to know where I can find information on how the genetic algorithm code in SciPy works. I'm looking for a tutorial on it.

Yennifer Santiago

On 1/16/08, scipy-user-request at scipy.org wrote:
>
> Today's Topics:
>
>    1. 3D (I'm not talking about rendering) (José María García Pérez)
>    2. Re: Bayes net question (Karl Young)
>    3. Re: Test failures with latest SVN (Nils Wagner)
>    4. Re: Memory leak in delaunay interpolator (Nils Wagner)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 16 Jan 2008 19:20:44 +0100
> From: "José María García Pérez"
> Subject: [SciPy-user] 3D (I'm not talking about rendering)
> To: scipy-user at scipy.org
>
> Hello everybody,
> I use scipy to work with big datasets, for example 300000 points. I want
> to do geometrical tasks with those points or lines: for example, finding
> all the points at the same geometrical position, or at nearly the same
> position.
>
> Although it's possible to do that using scipy alone, I'm looking for a
> package that does those tasks much faster than what I get now.
>
> Besides, I would like to do more geometrical things, like projecting or
> intersecting lines, ...
>
> Thanks a lot in advance,
> Jose M.
>
> ------------------------------
>
> Message: 2
> Date: Wed, 16 Jan 2008 10:14:33 -0800
> From: Karl Young
> Subject: Re: [SciPy-user] Bayes net question
> To: SciPy Users List
> Message-ID: <478E4989.4080100 at ucsf.edu>
>
> David Warde-Farley wrote:
>
> >On 16-Jan-08, at 12:05 PM, Karl Young wrote:
> >
> >>I've looked at the documentation for that package (but refuse to go
> >>back to Matlab re. actually using it :-)) and it looks like it's quite
> >>a bit further along than OpenBayes re. options, so does it seem like a
> >>consensus that we should wait for this to be settled before any
> >>decisions about incorporating Bayes nets? As I mentioned before, I'm
> >>willing to help (e.g. with a port) once a decision is reached re. the
> >>best approach.
> >
> >Likewise, I'd be willing to help with a port.
> >
> >I know that BNT includes code from other projects (for one thing the
> >IRLS implementation for optimizing the weights of a softmax from the
> >NetLab neural networks toolbox), so it may not be totally up to
> >Professor Murphy whether or not to dual-license the entire thing.
>
> Good point, though I guess we could study which of that functionality
> could be replaced by existing Scipy bits and which might be reasonable
> to rewrite (particularly if more than one of us is working on this),
> e.g. we might not need everything that's in the neural networks toolbox.
>
> --
> Karl Young
> Center for Imaging of Neurodegenerative Diseases, UCSF
> VA Medical Center (114M)        Phone: (415) 221-4810 x3114
> 4150 Clement Street             FAX: (415) 668-2864
> San Francisco, CA 94121         Email: karl young at ucsf edu
>
> ------------------------------
>
> Message: 3
> Date: Wed, 16 Jan 2008 20:17:08 +0100
> From: "Nils Wagner"
> Subject: Re: [SciPy-user] Test failures with latest SVN
> To: SciPy Users List
>
> On Wed, 16 Jan 2008 11:24:36 +0000 Robin wrote:
> > Hi,
> >
> > I am using Mac OS X 10.5.1 and recently updated to the latest svn
> > version. (an older version was already working ok on leopard).
> > I installed nose by hand. However I get a lot of errors in the tests.
> > I have put the output here because it is quite long:
> > http://pastebin.com/m58c1f6af
> > Also at the end of the test there is a graphical window with some
> > random text in it, so I have to press ctrl-c to get out of the python
> > session.
>
> Did you activate sandbox.lobpcg?
> A small test compares the spectrum computed by symeig and lobpcg.
> The spectra are depicted in a plot.
>
> > I am not sure if this is a mac issue or something wrong on my system...
> >
> > Robin
>
> Hi Robin,
>
> I have filed four tickets wrt recent test failures/errors.
>
> http://projects.scipy.org/scipy/scipy/ticket/588
> http://projects.scipy.org/scipy/scipy/ticket/587
> http://projects.scipy.org/scipy/scipy/ticket/586
> http://projects.scipy.org/scipy/scipy/ticket/584
>
> BTW, there is another typo in the test output. It should be
>
>     Hilbert transform of periodic functions
>
> instead of
>
>     Tilbert transform of periodic functions
>
> Cheers,
>
> Nils
>
> ------------------------------
From william.ratcliff at gmail.com  Wed Jan 16 19:57:08 2008
From: william.ratcliff at gmail.com (william ratcliff)
Date: Wed, 16 Jan 2008 19:57:08 -0500
Subject: [SciPy-user] Memory leak in delaunay interpolator
In-Reply-To:
References:
Message-ID: <827183970801161657p61762a3eqe8f36402eed88fe9@mail.gmail.com>

Would there be any interest in (or problem with) allowing error calculations for interpolations done with the Delaunay package? For example, one could send in x, y, z, zerror=myerror (as a keyword argument, it won't break existing code) and, if zerror is defined, also return an error bar (calculated in quadrature) based on the neighbors/areas used in the interpolation.

As for the patch: will it be (or was it) applied to the svn version of scipy?

Cheers,
William

On Jan 16, 2008 4:52 PM, Pauli Virtanen wrote:
> This may be the same as bug #382 (patch is available in the ticket).
>
> Also #376 may be relevant for the observed segfault: there are
> (or were?) known crasher bugs in the delaunay module.
>
> --
> Pauli Virtanen
From robert.kern at gmail.com  Wed Jan 16 20:22:53 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 16 Jan 2008 19:22:53 -0600
Subject: Re: [SciPy-user] Memory leak in delaunay interpolator
In-Reply-To: <827183970801161657p61762a3eqe8f36402eed88fe9@mail.gmail.com>
References: <827183970801161657p61762a3eqe8f36402eed88fe9@mail.gmail.com>
Message-ID: <478EADED.2050707@gmail.com>

william ratcliff wrote:
> Would there be any interest/problem with allowing error calculations for
> interpolations done with the Delaunay package? For example if one could
> send in x,y,z,zerror=myerror (that is, if it's a keyword argument, it
> won't break existing code) and if zerror is defined, also return an
> errorbar (calculated in quadrature) based on the neighbors/areas used in
> the interpolation?

You can do exactly this calculation with the current API. Just interpolate z=myerror**2 and take the sqrt() of the result.

I would prefer to keep the API small and composable rather than try to implement everything that's possibly useful. In particular, I dislike changing the number of return values based on the inputs. It makes code harder to read and generic code harder to write. Feel free to add to the documentation pointing out recipes like this, though.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco
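Robert's recipe in code, for concreteness: build a second interpolator from the same triangulation for the squared errors, then take the square root. A minimal sketch (the scikits.delaunay calls are the ones used earlier in this thread; the data and the name myerror are made up):

import numpy as np
from scikits import delaunay

xi, yi = np.random.rand(2, 1000)      # sample locations
z = np.random.rand(1000)              # measured values
myerror = 0.1 * np.random.rand(1000)  # one-sigma error on each z
x, y = np.random.rand(2, 100)         # points to interpolate onto

tri = delaunay.Triangulation(xi, yi)
zi = tri.nn_interpolator(z)(x, y)
# errors in quadrature: interpolate the variances, then sqrt
zi_err = np.sqrt(tri.nn_interpolator(myerror ** 2)(x, y))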
From michael.abshoff at googlemail.com  Thu Jan 17 00:06:42 2008
From: michael.abshoff at googlemail.com (Michael.Abshoff)
Date: Thu, 17 Jan 2008 06:06:42 +0100
Subject: Re: [SciPy-user] Memory leak in delaunay interpolator
In-Reply-To:
References:
Message-ID: <478EE262.3020305@gmail.com>

Pauli Virtanen wrote:
> Wed, 16 Jan 2008 20:40:33 +0100, Nils Wagner wrote:
>> On Wed, 16 Jan 2008 11:38:02 +0100 Rob Hetland wrote:
>>> I'm not sure who else uses the delaunay package (was in scipy.sandbox,
>>> now lives in scikits), but I find it indispensable. Today I found what
>>> appears to be a memory leak in the interpolator and extrapolator
>>> objects. This simple code demonstrates the leak:
>
> This may be the same as bug #382 (patch is available in the ticket).
>
> Also #376 may be relevant for the observed segfault: there are
> (or were?) known crasher bugs in the delaunay module.

Hi,

I ran the following code in Sage under valgrind's memcheck [slightly adapted from the previous post]:

sage: from delaunay import *
sage: from numpy.random import rand
sage: xi, yi = rand(2, 1000)
sage: x, y = rand(2, 100)  # needed by the loop below; present in the original script
sage: tri = Triangulation(xi, yi)
sage: for n in range(100000):
....:     interp = tri.nn_interpolator(rand(1000))
....:     z = interp(x, y)

using today's scipy sandbox checkout. The following popped up:

==16816== 6,944 bytes in 1 blocks are definitely lost in loss record 7,840 of 8,087
==16816==    at 0x4A1BB35: malloc (vg_replace_malloc.c:207)
==16816==    by 0x1BFE7829: VoronoiDiagramGenerator::myalloc(unsigned) (VoronoiDiagramGenerator.cpp:725)
==16816==    by 0x1BFE788D: VoronoiDiagramGenerator::PQinitialize() (VoronoiDiagramGenerator.cpp:570)
==16816==    by 0x1BFE8305: VoronoiDiagramGenerator::voronoi(int) (VoronoiDiagramGenerator.cpp:924)
==16816==    by 0x1BFE89DA: VoronoiDiagramGenerator::generateVoronoi(double*, double*, int, double, double, double, double, double) (VoronoiDiagramGenerator.cpp:136)
==16816==    by 0x1BFE65D7: delaunay_method (_delaunay.cpp:125)
==16816==    by 0x4833C1: PyEval_EvalFrameEx (ceval.c:3564)
==16816==    by 0x4852CA: PyEval_EvalCodeEx (ceval.c:2831)
==16816==    by 0x4CE817: function_call (funcobject.c:517)
==16816==    by 0x415542: PyObject_Call (abstract.c:1860)
==16816==    by 0x41BC62: instancemethod_call (classobject.c:2497)
==16816==    by 0x415542: PyObject_Call (abstract.c:1860)

I am not sure if the patch from #382 has been applied to svn trunk; I am still finding my way around scipy.

The other oddity that popped up was:

==16816==    at 0x15A89C1D: PyArray_MapIterReset (arrayobject.c:9854)
==16816==    by 0x15AC4827: array_subscript (arrayobject.c:2403)
==16816==    by 0x15AC4B54: array_subscript_nice (arrayobject.c:3030)
==16816==    by 0x47FA38: PyEval_EvalFrameEx (ceval.c:1192)
==16816==    by 0x4843CA: PyEval_EvalFrameEx (ceval.c:3650)
==16816==    by 0x4852CA: PyEval_EvalCodeEx (ceval.c:2831)
==16816==    by 0x4CE817: function_call (funcobject.c:517)
==16816==    by 0x415542: PyObject_Call (abstract.c:1860)
==16816==    by 0x41BC62: instancemethod_call (classobject.c:2497)
==16816==    by 0x415542: PyObject_Call (abstract.c:1860)
==16816==    by 0x455F97: slot_tp_init (typeobject.c:4862)
==16816==    by 0x458E40: type_call (typeobject.c:436)

If you are interested [and aren't already doing it] I could run valgrind on the whole numpy/scipy test suite on a regular basis. I already do that on a weekly basis for the Sage doctests, so adding it wouldn't be too much work once I have the build and testing automated.

Cheers,
Michael

From robert.kern at gmail.com  Thu Jan 17 01:10:35 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 17 Jan 2008 00:10:35 -0600
Subject: Re: [SciPy-user] Memory leak in delaunay interpolator
In-Reply-To: <478EE262.3020305@gmail.com>
References: <478EE262.3020305@gmail.com>
Message-ID: <478EF15B.9050806@gmail.com>

Michael.Abshoff wrote:
> I ran the following code in Sage under valgrind's memcheck [slightly
> adapted from the previous post]:
>
> sage: from delaunay import *
> sage: from numpy.random import rand
> sage: xi, yi = rand(2, 1000)
> sage: tri = Triangulation(xi, yi)
> sage: for n in range(100000):
> ....:     interp = tri.nn_interpolator(rand(1000))
> ....:     z = interp(x, y)
>
> using today's scipy sandbox checkout.
The following popped up: That's no longer valid. It lives here now: http://svn.scipy.org/svn/scikits/trunk/delaunay/ > ==16816== 6,944 bytes in 1 blocks are definitely lost in loss record > 7,840 of 8,087 > ==16816== at 0x4A1BB35: malloc (vg_replace_malloc.c:207) > ==16816== by 0x1BFE7829: VoronoiDiagramGenerator::myalloc(unsigned) > (VoronoiDiagramGenerator.cpp:725) > ==16816== by 0x1BFE788D: VoronoiDiagramGenerator::PQinitialize() > (VoronoiDiagramGenerator.cpp:570) > ==16816== by 0x1BFE8305: VoronoiDiagramGenerator::voronoi(int) > (VoronoiDiagramGenerator.cpp:924) This is not terribly surprising. There are about 4 different memory managers involved here: malloc, Stephan Fortune's, C++'s, and Python's. > ==16816== by 0x1BFE89DA: > VoronoiDiagramGenerator::generateVoronoi(double*, double*, int, double, > double, double, double, > double) (VoronoiDiagramGenerator.cpp:136) > ==16816== by 0x1BFE65D7: delaunay_method (_delaunay.cpp:125) > ==16816== by 0x4833C1: PyEval_EvalFrameEx (ceval.c:3564) > ==16816== by 0x4852CA: PyEval_EvalCodeEx (ceval.c:2831) > ==16816== by 0x4CE817: function_call (funcobject.c:517) > ==16816== by 0x415542: PyObject_Call (abstract.c:1860) > ==16816== by 0x41BC62: instancemethod_call (classobject.c:2497) > ==16816== by 0x415542: PyObject_Call (abstract.c:1860) > > I am not sure if the patch from #382 has been applied to svn trunk, I am > still finding my way around scipy. It got applied in the scikits tree, where it now lives. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From hetland at tamu.edu Thu Jan 17 04:02:02 2008 From: hetland at tamu.edu (Rob Hetland) Date: Thu, 17 Jan 2008 10:02:02 +0100 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: <478EF15B.9050806@gmail.com> References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> Message-ID: On Jan 17, 2008, at 7:10 AM, Robert Kern wrote: >> >> I am not sure if the patch from #382 has been applied to svn >> trunk, I am >> still finding my way around scipy. > > It got applied in the scikits tree, where it now lives. I just reinstalled the latest scikits.delaunay package, and I am getting a segfault right away, in the first interp statement. With the older version from the scipy.sandbox, I was getting a slow memory leak that would segfault after many (thousands of) interpolator/ interp calls. These code bases are different, primarily in the _delaunay.cpp file. However, my C is bad, and my C++ is worse, so I am useless beyond diffing the files... I just wanted to point out that there are perhaps two separate issues here. -Rob ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From jelleferinga at gmail.com Thu Jan 17 04:08:27 2008 From: jelleferinga at gmail.com (jelle) Date: Thu, 17 Jan 2008 09:08:27 +0000 (UTC) Subject: [SciPy-user] 3D (I'm not talking about rendering) References: Message-ID: Have a look at TVTK (traited VTK ). The underlying Visualization ToolKit is what drives some of the largest visualizations jobs around, and will surely grok that amount of data. Note that there's a very good guide on VTK, which I highly recommend. All the operations you mentioned are well supported in VTK. 
From josemaria.alkala at gmail.com  Thu Jan 17 04:54:39 2008
From: josemaria.alkala at gmail.com (José María García Pérez)
Date: Thu, 17 Jan 2008 10:54:39 +0100
Subject: Re: [SciPy-user] 3D (I'm not talking about rendering)
In-Reply-To:
References:
Message-ID:

I have started to use TVTK, but just for rendering. How could I ask TVTK to tell me which groups of nodes are less than 0.005 apart from each other?

Could you give the name of a function that does that task (or a similar one)?

Regards,
Jose M.

2008/1/17, jelle:
> Have a look at TVTK (traited VTK).
>
> The underlying Visualization ToolKit is what drives some of the largest
> visualization jobs around, and will surely grok that amount of data.
> Note that there's a very good guide on VTK, which I highly recommend.
> All the operations you mentioned are well supported in VTK.
>
> -jelle

From haase at msg.ucsf.edu  Thu Jan 17 05:43:34 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Thu, 17 Jan 2008 11:43:34 +0100
Subject: Re: [SciPy-user] 3D (I'm not talking about rendering)
In-Reply-To:
References:
Message-ID:

How about sympy?

    "SymPy is a Python library for symbolic mathematics. It aims to become
    a full-featured computer algebra system (CAS) while keeping the code as
    simple as ..."
    http://code.google.com/p/sympy/

Maybe even "sage".

-Sebastian Haase

On Jan 17, 2008 10:54 AM, José María García Pérez wrote:
> I have started to use TVTK, but just for rendering. How could I ask TVTK
> to tell me which groups of nodes are less than 0.005 apart from each
> other?
>
> Could you give the name of a function that does that task (or a similar
> one)?
>
> Regards,
> Jose M.

From pav at iki.fi  Thu Jan 17 06:41:07 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 17 Jan 2008 11:41:07 +0000 (UTC)
Subject: Re: [SciPy-user] Memory leak in delaunay interpolator
References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com>
Message-ID:

Thu, 17 Jan 2008 10:02:02 +0100, Rob Hetland wrote:
> On Jan 17, 2008, at 7:10 AM, Robert Kern wrote:
>>> I am not sure if the patch from #382 has been applied to svn trunk, I
>>> am still finding my way around scipy.
>>
>> It got applied in the scikits tree, where it now lives.
>
> I just reinstalled the latest scikits.delaunay package, and I am getting
> a segfault right away, in the first interp statement. With the older
> version from the scipy.sandbox, I was getting a slow memory leak that
> would segfault after many (thousands of) interpolator/interp calls.
> > These code bases are different, primarily in the _delaunay.cpp file. > However, my C is bad, and my C++ is worse, so I am useless beyond > diffing the files... > > I just wanted to point out that there are perhaps two separate issues > here. The current version [1] of scikits.delaunay:_delaunay.cpp:nn_interpolate_unstructured_method appears to XDECREF the variable intz before returning it. Is this a reference count error? [1] http://svn.scipy.org/svn/scikits/trunk/delaunay/scikits/delaunay/ _delaunay.cpp -- Pauli Virtanen From J.Anderson at hull.ac.uk Thu Jan 17 07:30:11 2008 From: J.Anderson at hull.ac.uk (Joseph Anderson) Date: Thu, 17 Jan 2008 12:30:11 -0000 Subject: [SciPy-user] newbie trouble with lfilter Message-ID: Hi All, I'm new to both python and scipy, so it could be that I'm just not seeing what I'm doing wrong here. I'm wanting to use lfilter by setting the initial conditions (zi), and filter multi-dimensional arrays. The notion for this is so that I can repeatedly call a filter on blocks of samples pulled out of a large file--too big to load into a single array in memory. I can get it to work for the case of a one dimensional array, but not managing to put the zi array in the correct form for anything else. Here's my code that works for a 1-d array: # ************************************** # test filter: 1-d array # # ************************************** from numpy import * from numpy.random import * from scipy.signal import * # order and cutoff N = 1 freq = 0.1 # signal, single channel x = uniform(-1., 1., 8) # design filter b, a = butter(N, freq) # set initial condition zi = zeros(N) # filter y, zf = lfilter(b, a, x, 0, zi) print "b, a:", b, a print "zi, zf:", zi, zf print "x:", x print "y:", y Here's code that doesn't work for a 2-d array: # ************************************** # test filter: 2-d array # zi = zeros(N * 2).reshape(2, N) # # ************************************** from numpy import * from numpy.random import * from scipy.signal import * # order and cutoff N = 1 freq = 0.1 # signal, two channels x = array([uniform(-1., 1., 8), zeros(8)]).transpose() # design filter b, a = butter(N, freq) # set initial condition, need N * channels zeros zi = zeros(N * 2).reshape(2, N) # filter y, zf = lfilter(b, a, x, 0, zi) print "b, a:", b, a print "zi, zf:", zi, zf print "x:", x print "y:", y The error I get is: >>> Traceback (most recent call last): File "/var/folders/BZ/BZCZyDCzGYio72sPH9CkP++++TM/-Tmp-/py275R3I", line 58, in y, zf = lfilter(b, a, x, 0, zi) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/signal/signaltools.py", line 495, in lfilter return sigtools._linear_filter(b, a, x, axis, zi) ValueError: The number of initial conditions must be max([len(a),len(b)]) - 1 Trying something different and changing zi to: zi = zeros(N * 2).reshape(N, 2) Results in a crash with the message: Process Python segmentation fault Attempting a slightly different approach, I tried using lfiltic with the 2-dim case, to see what form zi is returned. 
Here's that code: # ************************************** # test filter: 2-d array # find zi with lfiltic # # ************************************** from numpy import * from numpy.random import * from scipy.signal import * # order and cutoff N = 1 freq = 0.1 # signal, two channels x = array([uniform(-1., 1., 8), zeros(8)]).transpose() # design filter b, a = butter(N, freq) # filter y = lfilter(b, a, x, 0) # # set initial condition, need N * channels zi = empty(N * 2).reshape(2, N) # find zi from initial conditions zi = lfiltic(b, a, y, x) print "b, a:", b, a print "zi:", zi print "x:", x print "y:", y Running results in: >>> Traceback (most recent call last): File "/var/folders/BZ/BZCZyDCzGYio72sPH9CkP++++TM/-Tmp-/py275Uzp", line 133, in zi = lfiltic(b, a, y, x) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/signal/signaltools.py", line 534, in lfiltic zi[m] = sum(b[m+1:]*x[:M-m],axis=0) ValueError: setting an array element with a sequence. Swapping the zi initialization for: zi = empty(N * 2).reshape(N, 2) results in the same error. My system is a G5 running Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin. Numpy version is 1.0.4.dev3884.Scipy version is 0.5.3.dev3165. As you can likely imaging, I can take the approach to split all my 'interleaved' 2-dimensional arrays into 1-d arrays and successfully set zi and access the final (zf) state from lfilter. However, as lfilter is happy to filter 2-dim arrays, would like to keep my own code simple by using that feature. It's just that I'm not managing to initially set up zi correctly Thanks in advance for your help. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dr Joseph Anderson Lecturer in Music School of Arts and New Media University of Hull, Scarborough Campus, Scarborough, North Yorkshire, YO11 3AZ, UK T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From robert.kern at gmail.com Thu Jan 17 13:22:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Jan 2008 12:22:30 -0600 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> Message-ID: <478F9CE6.2050904@gmail.com> Pauli Virtanen wrote: > Thu, 17 Jan 2008 10:02:02 +0100, Rob Hetland wrote: > >> On Jan 17, 2008, at 7:10 AM, Robert Kern wrote: >> >> >>>> I am not sure if the patch from #382 has been applied to svn trunk, I >>>> am >>>> still finding my way around scipy. >>> It got applied in the scikits tree, where it now lives. >> >> I just reinstalled the latest scikits.delaunay package, and I am getting >> a segfault right away, in the first interp statement. With the older >> version from the scipy.sandbox, I was getting a slow memory leak that >> would segfault after many (thousands of) interpolator/ interp calls. >> >> These code bases are different, primarily in the _delaunay.cpp file. >> However, my C is bad, and my C++ is worse, so I am useless beyond >> diffing the files... >> >> I just wanted to point out that there are perhaps two separate issues >> here. > > The current version [1] of > scikits.delaunay:_delaunay.cpp:nn_interpolate_unstructured_method > appears to XDECREF the variable intz before returning it. Is this a > reference count error? 
> > [1] http://svn.scipy.org/svn/scikits/trunk/delaunay/scikits/delaunay/ > _delaunay.cpp Yup. Try it now. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Thu Jan 17 13:54:24 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 17 Jan 2008 19:54:24 +0100 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: <478F9CE6.2050904@gmail.com> References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> Message-ID: Hi Robert, Thank you very much ! IMHO ticket 591 can be closed. http://projects.scipy.org/scipy/scipy/ticket/591 /usr/bin/python test_triangulate.py /usr/lib/python2.4/site-packages/scikits/delaunay/triangulate.py:78: DuplicatePointWarning: Input data contains duplicate x,y points; some values are ignored. DuplicatePointWarning, .....F. ====================================================================== FAIL: test_triangulate.TestSanity.test_circle_condition ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose/case.py", line 203, in runTest self.test(*self.arg) File "/home/nwagner/svn/delaunay/tests/test_triangulate.py", line 60, in test_circle_condition assert False, "doesn't work perfectly given floating point imprecision" AssertionError: doesn't work perfectly given floating point imprecision ---------------------------------------------------------------------- Ran 7 tests in 0.272s FAILED (failures=1) Cheers, Nils From hetland at tamu.edu Thu Jan 17 13:59:24 2008 From: hetland at tamu.edu (Rob Hetland) Date: Thu, 17 Jan 2008 19:59:24 +0100 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: <478F9CE6.2050904@gmail.com> References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> Message-ID: On Jan 17, 2008, at 7:22 PM, Robert Kern wrote: > Try it now. I tried the latest svn scikits.delaunay, and the instant segfault is fixed. Great! However, the original, slow memory leak still seems to be present. -Rob ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From william.ratcliff at gmail.com Thu Jan 17 14:35:50 2008 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 17 Jan 2008 14:35:50 -0500 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> Message-ID: <827183970801171135p41429bcfh1167ba51114f1423@mail.gmail.com> Does it still crash if you give it a degenerate list of points? Cheers, William On Jan 17, 2008 1:59 PM, Rob Hetland wrote: > > On Jan 17, 2008, at 7:22 PM, Robert Kern wrote: > > > Try it now. > > > I tried the latest svn scikits.delaunay, and the instant segfault is > fixed. Great! > > However, the original, slow memory leak still seems to be present. > > -Rob > > ---- > Rob Hetland, Associate Professor > Dept. 
of Oceanography, Texas A&M University > http://pong.tamu.edu/~rob > phone: 979-458-0096, fax: 979-845-6331 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Jan 17 14:57:35 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 17 Jan 2008 19:57:35 +0000 (UTC) Subject: [SciPy-user] Memory leak in delaunay interpolator References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> <827183970801171135p41429bcfh1167ba51114f1423@mail.gmail.com> Message-ID: Thu, 17 Jan 2008 14:35:50 -0500, william ratcliff wrote: > Does it still crash if you give it a degenerate list of points? It removes duplicate points prior to triangulation, and I think the segfault associated with this is now patched by Robert Kern. But there are still datasets that it can't triangulate (raises KeyError now instead of a hard segfault); I managed to reduce the problematic example in #376 to a simpler one: >>> import scikits.delaunay as d >>> import numpy as np >>> data = np.array([ ... [-1, -1 ], [-1, 0], [-1, 1], ... [ 0, -1 ], [ 0, 0], [ 0, 1], ... [ 1, -1 - np.finfo(np.float_).eps], [ 1, 0], [ 1, 1], ... ]) >>> tri = d.Triangulation(data[:,0], data[:,1]) Traceback (most recent call last): File "", line 1, in File "/home/pauli/koodi/proj/delaunay/main/scikits/delaunay/ triangulate.py", line 90, in __init__ self.hull = self._compute_convex_hull() File "/home/pauli/koodi/proj/delaunay/main/scikits/delaunay/ triangulate.py", line 125, in _compute_convex_hull hull.append(edges.pop(hull[-1])) KeyError: 6 >>> data[6,1] = -1 >>> tri = d.Triangulation(data[:,0], data[:,1]) >>> data[6,1] = -1 - 1e6*np.finfo(np.float_).eps >>> tri = d.Triangulation(data[:,0], data[:,1]) That is, it works only after rounding off the epsilon. Also, it seems to work if instead of eps, 1e6*eps is subtracted from data[6,1]. -- Pauli Virtanen From bryan at cole.uklinux.net Thu Jan 17 15:31:53 2008 From: bryan at cole.uklinux.net (Bryan Cole) Date: Thu, 17 Jan 2008 20:31:53 +0000 Subject: [SciPy-user] 3D (I'm not talking about rendering) In-Reply-To: References: Message-ID: <1200601912.6714.20.camel@pc1.cole.uklinux.net> On Thu, 2008-01-17 at 10:54 +0100, José María García Pérez wrote: > I have started to use TVTK, but just for rendering. How could I ask > TVTK to tell me groups of nodes which are less than 0.005 apart > from each other? > > Could you write the name of a function that can do that task (or a > similar task)? VTK (or TVTK) is indeed your friend. If you have 300000 points and you want to find all occurrences where the point spacing is below a certain threshold you could use the "Gaussian Splatter" filter to find approximate regions of high point density (see http://www.vtk.org/doc/release/5.0/html/a01413.html or maybe the related Shepard Method). This would give you a measure of the point density on a regular grid. You could sample this using your original points and then filter them based on this point-density function. This would hopefully reduce your point set to something small enough to do an exact find-the-nearest-neighbour routine on (needs an N*N matrix, where N is your number of filtered points). You could use a vtkPointLocator for this but it's probably easier/faster to just write one with python/numpy.
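Something along these lines, say (a rough sketch; tol matches the 0.005 from your post, and the point set must already be small enough that the N*N distance matrix fits in memory):

import numpy as np

def close_pairs(pts, tol=0.005):
    # pts is an (N, 3) array of pre-filtered points; this builds the
    # full N x N distance matrix, so keep N modest
    diff = pts[:, np.newaxis, :] - pts[np.newaxis, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.nonzero(dist < tol)
    keep = i < j  # drop the diagonal and symmetric duplicates
    return zip(i[keep], j[keep])

pts = np.random.uniform(size=(1000, 3))
print len(close_pairs(pts))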
If you're in no hurry, you could just apply the vtkPointLocator to all 300000 points. > > Regards, > Jose M. > > 2008/1/17, jelle : > Have a look at TVTK (traited VTK). > > The underlying Visualization ToolKit is what drives some of > the largest > visualization jobs around, and will surely grok that amount > of data. > Note that there's a very good guide on VTK, which I highly > recommend. > All the operations you mentioned are well supported in VTK. > > -jelle > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jelleferinga at gmail.com Thu Jan 17 15:45:45 2008 From: jelleferinga at gmail.com (jelle) Date: Thu, 17 Jan 2008 20:45:45 +0000 (UTC) Subject: [SciPy-user] 3D (I'm not talking about rendering) References: Message-ID: PointLocator is what you're looking for. Make sure you've got an understanding of what Octrees are! From robert.kern at gmail.com Thu Jan 17 16:08:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Jan 2008 15:08:38 -0600 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> Message-ID: <478FC3D6.4000609@gmail.com> Rob Hetland wrote: > On Jan 17, 2008, at 7:22 PM, Robert Kern wrote: > >> Try it now. > > > I tried the latest svn scikits.delaunay, and the instant segfault is > fixed. Great! > > However, the original, slow memory leak still seems to be present. I cannot reproduce this with the code you gave. Is there another test case that you have available? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Jan 17 16:21:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Jan 2008 15:21:20 -0600 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> <827183970801171135p41429bcfh1167ba51114f1423@mail.gmail.com> Message-ID: <478FC6D0.6070606@gmail.com> Pauli Virtanen wrote: > Thu, 17 Jan 2008 14:35:50 -0500, william ratcliff wrote: > >> Does it still crash if you give it a degenerate list of points? > > It removes duplicate points prior to triangulation, and I think the > segfault associated with this is now patched by Robert Kern. Well, you patched it; I checked it in. > But there > are still datasets that it can't triangulate (raises KeyError now instead > of a hard segfault); I managed to reduce the problematic example in #376 > to a simpler one: > >>>> import scikits.delaunay as d >>>> import numpy as np >>>> data = np.array([ > ... [-1, -1 ], [-1, 0], [-1, 1], > ... [ 0, -1 ], [ 0, 0], [ 0, 1], > ... [ 1, -1 - np.finfo(np.float_).eps], [ 1, 0], [ 1, 1], > ...
]) >>>> tri = d.Triangulation(data[:,0], data[:,1]) > Traceback (most recent call last): > File "", line 1, in > File "/home/pauli/koodi/proj/delaunay/main/scikits/delaunay/ > triangulate.py", line 90, in __init__ > self.hull = self._compute_convex_hull() > File "/home/pauli/koodi/proj/delaunay/main/scikits/delaunay/ > triangulate.py", line 125, in _compute_convex_hull > hull.append(edges.pop(hull[-1])) > KeyError: 6 >>>> data[6,1] = -1 >>>> tri = d.Triangulation(data[:,0], data[:,1]) >>>> data[6,1] = -1 - 1e6*np.finfo(np.float_).eps >>>> tri = d.Triangulation(data[:,0], data[:,1]) > > That is, it works only after rounding off the epsilon. Also, it seems to > work if instead of eps, 1e6*eps is subtracted from data[6,1]. This may be a fundamental limitation of the underlying implementation/algorithm being used here. Steve Fortune's sweepline algorithm cannot be made to use exact geometric predicates with floating point numbers. Other Delaunay triangulation algorithms like the incremental construction and divide-and-conquer can. I have been (slowly) working on implementing these myself using Jon Shewchuk's exact geometric predicates, but this is a low-priority project for me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rmay at ou.edu Thu Jan 17 16:46:41 2008 From: rmay at ou.edu (Ryan May) Date: Thu, 17 Jan 2008 15:46:41 -0600 Subject: [SciPy-user] Correlate Times? Message-ID: <478FCCC1.9060806@ou.edu> Hey, Can someone explain this to me? In [3]: import scipy as S In [5]: import scipy.signal as SS In [6]: from numpy.random import rand In [7]: up = rand(18000) In [10]: %timeit N.correlate(up,up,mode='full') 10 loops, best of 3: 829 ms per loop In [11]: %timeit S.correlate(up,up,mode='full') 10 loops, best of 3: 827 ms per loop In [12]: %timeit SS.correlate(up,up,mode='full') 10 loops, best of 3: 11.5 s per loop Is this a configuration problem? If not, why does scipy.signal.correlate even exist? Thanks, Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From yennifersantiago at gmail.com Thu Jan 17 19:49:49 2008 From: yennifersantiago at gmail.com (Yennifer Santiago) Date: Thu, 17 Jan 2008 20:49:49 -0400 Subject: [SciPy-user] SciPy Genetic Algorithms Message-ID: <41bc705b0801171649p56d72d63lcb854b869a756f86@mail.gmail.com> Hello... I need to know where I can obtain information about the operation of the Genetic Algorithms in SciPy. I'm looking for some tutorial about it. Yennifer From john at curioussymbols.com Thu Jan 17 21:03:14 2008 From: john at curioussymbols.com (John Pye) Date: Fri, 18 Jan 2008 13:03:14 +1100 Subject: [SciPy-user] 3D (I'm not talking about rendering) In-Reply-To: References: Message-ID: <479008E2.80109@curioussymbols.com> Hi José Perhaps CGAL? http://cgal-python.gforge.inria.fr/ Or VPython? http://www.vpython.org/ I'd be interested to hear how it goes. Cheers JP José María García Pérez wrote: > Hello everybody, > I use scipy to work with big datasets. For example, 300000 points. I > want to do geometrical tasks with those points or lines. For example, > I want to look for all the points with the same geometrical position or > with similar geometrical position. > > Although it's possible to do that just using scipy, I'm looking for a > package to do those tasks much faster than what I get now.
> > Besides I would like to do more geometrical things, like projecting or > intersecting lines, ... > > Thanks a lot in advance, > Jose M. > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cohen at slac.stanford.edu Thu Jan 17 21:12:10 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 17 Jan 2008 18:12:10 -0800 Subject: [SciPy-user] Correlate Times? In-Reply-To: <478FCCC1.9060806@ou.edu> References: <478FCCC1.9060806@ou.edu> Message-ID: <47900AFA.4050509@slac.stanford.edu> hi Ryan, I see the same effect on my box (I actually lost patience and killed SS). Note that you forgot an "import numpy as N". Anyway : In [15]: ?N.correlate Type: function Base Class: String Form: Namespace: Interactive File: /usr/lib/python2.5/site-packages/numpy/core/numeric.py Definition: N.correlate(a, v, mode='valid') Docstring: Return the discrete, linear correlation of 1-D sequences a and v; mode can be 'valid', 'same', or 'full' to specify the size of the resulting sequence In [16]: ?S.correlate Type: function Base Class: String Form: Namespace: Interactive File: /usr/lib/python2.5/site-packages/numpy/core/numeric.py Definition: S.correlate(a, v, mode='valid') Docstring: Return the discrete, linear correlation of 1-D sequences a and v; mode can be 'valid', 'same', or 'full' to specify the size of the resulting sequence In [19]: ?SS.correlate Type: function Base Class: String Form: Namespace: Interactive File: /usr/lib/python2.5/site-packages/scipy/signal/signaltools.py Definition: SS.correlate(in1, in2, mode='full') Docstring: Cross-correlate two N-dimensional arrays. Description: Cross-correlate in1 and in2 with the output size determined by mode. Inputs: in1 -- an N-dimensional array. in2 -- an array with the same number of dimensions as in1. mode -- a flag indicating the size of the output 'valid' (0): The output consists only of those elements that do not rely on the zero-padding. 'same' (1): The output is the same size as the largest input centered with respect to the 'full' output. 'full' (2): The output is the full discrete linear cross-correlation of the inputs. (Default) Outputs: (out,) out -- an N-dimensional array containing a subset of the discrete linear cross-correlation of in1 with in2. So, S calls N.correlate, and it is a 1D array function, while SS accepts N-dim array. Nevertheless, the cost when the array is 1D is strange for SS, and if it is a feature, it should maybe check array dim and default to the numpy implementation when 1D.... best, Johann Ryan May wrote: > Hey, > > Can someone explain this to me? > > In [3]: import scipy as S > > In [5]: import scipy.signal as SS > > In [6]: from numpy.random import rand > > In [7]: up = rand(18000) > > In [10]: %timeit N.correlate(up,up,mode='full') > 10 loops, best of 3: 829 ms per loop > > In [11]: %timeit S.correlate(up,up,mode='full') > 10 loops, best of 3: 827 ms per loop > > In [12]: %timeit SS.correlate(up,up,mode='full') > 10 loops, best of 3: 11.5 s per loop > > Is this a configuration problem? If not, why does > scipy.signal.correlate even exist? > > Thanks, > > Ryan > > From oliphant at enthought.com Fri Jan 18 00:02:29 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 17 Jan 2008 23:02:29 -0600 Subject: [SciPy-user] Correlate Times? 
In-Reply-To: <478FCCC1.9060806@ou.edu> References: <478FCCC1.9060806@ou.edu> Message-ID: <479032E5.1060406@enthought.com> Ryan May wrote: > Hey, > > Can someone explain this to me? > > In [3]: import scipy as S > > In [5]: import scipy.signal as SS > > In [6]: from numpy.random import rand > > In [7]: up = rand(18000) > > In [10]: %timeit N.correlate(up,up,mode='full') > 10 loops, best of 3: 829 ms per loop > > In [11]: %timeit S.correlate(up,up,mode='full') > 10 loops, best of 3: 827 ms per loop > > In [12]: %timeit SS.correlate(up,up,mode='full') > 10 loops, best of 3: 11.5 s per loop > > Is this a configuration problem? If not, why does > scipy.signal.correlate even exist? > scipy.signal.correlate is an N-d correlation algorithm, as has been noted. It is going to be slower for 1-d arrays. Now, there is nothing wrong with checking for that case and calling the 1-d version, it's just never been done (probably because people who only need 1-d correlation are already just using numpy.correlate). ndimage also has N-d correlation inside it which was created much later. I think it is currently faster (but with different arguments that I don't fully understand, so I'm not sure what command would be equivalent). Try scipy.ndimage.correlate and see how fast it is. This is part of the kind of clean-up that SciPy really needs. -Travis From josemaria.alkala at gmail.com Fri Jan 18 01:24:50 2008 From: josemaria.alkala at gmail.com (José María García Pérez) Date: Fri, 18 Jan 2008 07:24:50 +0100 Subject: [SciPy-user] 3D (I'm not talking about rendering) In-Reply-To: <479008E2.80109@curioussymbols.com> References: <479008E2.80109@curioussymbols.com> Message-ID: Thank you all, I was taking a look at CGAL. Besides, I'll take a look at the suggested functions inside VTK. I'll post comments on my progress. Regards, José M. 2008/1/18, John Pye : > > Hi José > > Perhaps CGAL? > http://cgal-python.gforge.inria.fr/ > > Or VPython? > http://www.vpython.org/ > > I'd be interested to hear how it goes. > > Cheers > JP > > José María García Pérez wrote: > > Hello everybody, > > I use scipy to work with big datasets. For example, 300000 points. I > > want to do geometrical tasks with those points or lines. For example, > > I want to look for all the points with the same geometrical position or > > with similar geometrical position. > > > > Although it's possible to do that just using scipy, I'm looking for a > > package to do those tasks much faster than what I get now. > > > > Besides I would like to do more geometrical things, like projecting or > > intersecting lines, ... > > > > Thanks a lot in advance, > > Jose M. > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Fri Jan 18 01:56:01 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 17 Jan 2008 22:56:01 -0800 Subject: [SciPy-user] Correlate Times?
In-Reply-To: <479032E5.1060406@enthought.com> References: <478FCCC1.9060806@ou.edu> <479032E5.1060406@enthought.com> Message-ID: <47904D81.1050009@slac.stanford.edu> hi, talking about cleanup, I see that the delaunay package is duplicated in scipy.sandbox and scikits. The files seem fairly identical.... best, Johann Travis E. Oliphant wrote: > Ryan May wrote: > >> Hey, >> >> Can someone explain this to me? >> >> In [3]: import scipy as S >> >> In [5]: import scipy.signal as SS >> >> In [6]: from numpy.random import rand >> >> In [7]: up = rand(18000) >> >> In [10]: %timeit N.correlate(up,up,mode='full') >> 10 loops, best of 3: 829 ms per loop >> >> In [11]: %timeit S.correlate(up,up,mode='full') >> 10 loops, best of 3: 827 ms per loop >> >> In [12]: %timeit SS.correlate(up,up,mode='full') >> 10 loops, best of 3: 11.5 s per loop >> >> Is this a configuration problem? If not, why does >> scipy.signal.correlate even exist? >> >> > > scipy.signal.correlate is a N-d correlation algorithm as has been > noted. It is going to be slower for 1-d arrays. Now, there is nothing > wrong with checking for that case and calling the 1-d version, it's just > never been done (probably because people who only need 1-d correlation > are already just using numpy.correlate). > > ndimage also has N-d correlation inside it which was created much > later. I think it is currently faster (but with different arguments > that I don't fully understand so, I'm not sure what command would be > equivalent. > > Try scipy.ndimage.correlate and see how fast it is. > > This is part of the kind of clean-up that SciPy really needs. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Fri Jan 18 02:13:33 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 Jan 2008 01:13:33 -0600 Subject: [SciPy-user] Correlate Times? In-Reply-To: <47904D81.1050009@slac.stanford.edu> References: <478FCCC1.9060806@ou.edu> <479032E5.1060406@enthought.com> <47904D81.1050009@slac.stanford.edu> Message-ID: <4790519D.7050600@gmail.com> Johann Cohen-Tanugi wrote: > hi, > talking about cleanup, I see that the delaunay package is duplicated in > scipy.sandbox and scikits. The files seem fairly identical.... I deleted the sandbox version yesterday. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Fri Jan 18 02:27:10 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 18 Jan 2008 09:27:10 +0200 Subject: [SciPy-user] newbie trouble with lfilter In-Reply-To: References: Message-ID: <20080118072710.GK28865@mentat.za.net> Hi Joseph On Thu, Jan 17, 2008 at 12:30:11PM -0000, Joseph Anderson wrote: > Trying something different and changing zi to: > > zi = zeros(N * 2).reshape(N, 2) > > Results in a crash with the message: > > Process Python segmentation fault This should never happen. A bug is filed at http://projects.scipy.org/scipy/scipy/ticket/331 Regards St?fan From grh at mur.at Fri Jan 18 03:10:41 2008 From: grh at mur.at (Georg Holzmann) Date: Fri, 18 Jan 2008 09:10:41 +0100 Subject: [SciPy-user] Correlate Times? In-Reply-To: <479032E5.1060406@enthought.com> References: <478FCCC1.9060806@ou.edu> <479032E5.1060406@enthought.com> Message-ID: <47905F01.2050104@mur.at> Hallo! 
> scipy.signal.correlate is an N-d correlation algorithm, as has been > noted. It is going to be slower for 1-d arrays. Now, there is nothing > wrong with checking for that case and calling the 1-d version, it's just > never been done (probably because people who only need 1-d correlation > are already just using numpy.correlate). I did not check the code now, but aren't all these correlation methods calculated in the time domain? I was quite confused some months ago when trying to calculate the correlation of bigger data, and then implemented an FFT version. So I think it would be useful if (at least in scipy.signal) it automatically calculated the correlation in the frequency domain for bigger data - AFAIK that's also the way it is done in matlab ... LG Georg From matthieu.brucher at gmail.com Fri Jan 18 04:25:43 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 18 Jan 2008 10:25:43 +0100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) Message-ID: Hi, Does someone know a package in Python that could extract this information from an image (2D or 3D)? I'm searching right now, but couldn't find anything at the moment. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelleferinga at gmail.com Fri Jan 18 04:54:10 2008 From: jelleferinga at gmail.com (jelle feringa) Date: Fri, 18 Jan 2008 09:54:10 +0000 (UTC) Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) References: Message-ID: Sure, for simple straightforward image processing you can use PIL. If you need to apply advanced image analysis, WrapITK is your friend. -jelle From gruben at bigpond.net.au Fri Jan 18 06:55:35 2008 From: gruben at bigpond.net.au (Gary Ruben) Date: Fri, 18 Jan 2008 22:55:35 +1100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) In-Reply-To: References: Message-ID: <479093B7.2030100@bigpond.net.au> Hi Matthieu, OpenCv may get you part of the way. I haven't used it but I notice that the Python wrappers are packaged in Ubuntu Linux. If they wrap everything, they'll give you the Hu moments. Gary R. Matthieu Brucher wrote: > Hi, > > Does someone know a package in Python that could extract this > information from an image (2D or 3D)? I'm searching right now, but > couldn't find anything at the moment. > > Matthieu > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher From matthieu.brucher at gmail.com Fri Jan 18 08:15:25 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 18 Jan 2008 14:15:25 +0100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) In-Reply-To: References: Message-ID: Thanks for the tips, but ITK seems only to support the usual moments; there are no traces of the more complex invariants. And OpenCV seems to be 2D only (like PIL). Other ideas? Matthieu 2008/1/18, jelle feringa : > > Sure, for simple straightforward image processing you can use PIL. > If you need to apply advanced image analysis, WrapITK is your friend.
> > -jelle > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From w.richert at gmx.net Fri Jan 18 09:53:13 2008 From: w.richert at gmx.net (Willi Richert) Date: Fri, 18 Jan 2008 15:53:13 +0100 Subject: [SciPy-user] timeseries import error Message-ID: <200801181553.13213.w.richert@gmx.net> Hallo, Hi, I've installed timeseries from the sandbox package. Having the latest scipy/numpy versions I get: In [1]: from timeseries.lib.moving_funcs import mov_average_expw as mov_avg --------------------------------------------------------------------------- Traceback (most recent call last) /home/wr/wlan/ in () /usr/lib/python2.5/site-packages/timeseries/__init__.py in () 13 14 import const ---> 15 import dates 16 from dates import * 17 import tseries /usr/lib/python2.5/site-packages/timeseries/dates.py in () 26 from numpy.core.numerictypes import generic 27 ---> 28 import maskedarray as MA 29 30 from parser import DateFromString, DateTimeFromString /usr/lib/python2.5/site-packages/maskedarray/__init__.py in () 15 from core import * 16 ---> 17 import extras 18 from extras import * 19 /usr/lib/python2.5/site-packages/maskedarray/extras.py in () 36 from numpy.core.fromnumeric import asarray as nxasarray 37 ---> 38 from numpy.lib.index_tricks import AxisConcatenator 39 import numpy.lib.function_base as function_base 40 Regards, wr From mattknox_ca at hotmail.com Fri Jan 18 10:53:46 2008 From: mattknox_ca at hotmail.com (Matt Knox) Date: Fri, 18 Jan 2008 15:53:46 +0000 (UTC) Subject: [SciPy-user] timeseries import error References: <200801181553.13213.w.richert@gmx.net> Message-ID: Willi Richert gmx.net> writes: > > Hallo, > > Hi, > > I've installed timeseries from the sandbox package. Having the latest scipy/numpy versions I get: > > In [1]: from timeseries.lib.moving_funcs import mov_average_expw as mov_avg > --------------------------------------------------------------------------- > Traceback (most recent call last) > > /home/wr/wlan/?ipython console> in () > > /usr/lib/python2.5/site-packages/timeseries/__init__.py in () > 13 > 14 import const > ---> 15 import dates > 16 from dates import * > 17 import tseries > > /usr/lib/python2.5/site-packages/timeseries/dates.py in () > 26 from numpy.core.numerictypes import generic > 27 > ---> 28 import maskedarray as MA > 29 > 30 from parser import DateFromString, DateTimeFromString > > /usr/lib/python2.5/site-packages/maskedarray/__init__.py in () > 15 from core import * > 16 > ---> 17 import extras > 18 from extras import * > 19 > > /usr/lib/python2.5/site-packages/maskedarray/extras.py in () > 36 from numpy.core.fromnumeric import asarray as nxasarray > 37 > ---> 38 from numpy.lib.index_tricks import AxisConcatenator > 39 import numpy.lib.function_base as function_base > 40 > > Regards, > wr > Not sure off the top of my head what is causing that (I'll try and take a look early next week if Pierre doesn't beat me to it)... but a general word of caution: it will probably be rough sailing for the timeseries module until 2 things happen... 1. maskedarray merging into numpy is complete 2. 
timeseries is ported to a scikit Both of these things will *probably* happen within the next couple of months. In the meantime, sorry for the headaches. - Matt From david at ar.media.kyoto-u.ac.jp Fri Jan 18 10:46:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 19 Jan 2008 00:46:52 +0900 Subject: [SciPy-user] [ANN] pysamplerate: available in scikits, as samplerate Message-ID: <4790C9EC.3000102@ar.media.kyoto-u.ac.jp> Hi, A quick announcement: pysamplerate is finally available in the scikits, as I said I would do for months; I changed the name to samplerate. The only change since the last time I touched it is the conversion to setuptools (necessary for scikits). samplerate is a really simple wrapper around SRC, to do high quality audio resampling, and is licensed under the GPL. cheers, David From david.huard at gmail.com Fri Jan 18 12:09:02 2008 From: david.huard at gmail.com (David Huard) Date: Fri, 18 Jan 2008 12:09:02 -0500 Subject: [SciPy-user] timeseries import error In-Reply-To: <200801181553.13213.w.richert@gmx.net> References: <200801181553.13213.w.richert@gmx.net> Message-ID: <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> Hi Willi, There are a number of conflicts between maskedarray, timeseries and scipy. This is my understanding of the situation: The current timeseries depends on scipy.sandbox.maskedarray, but as you have seen, it doesn't work so well. timeseries can be easily modified to depend on the ma module in the numpy maskedarray branch, but since the branch dates from about two months ago, the latest changes to numpy haven't been merged and scipy is currently incompatible with numpy's maskedarray branch. As a consequence, timeseries (depending on scipy.io for instance) won't import. What seems to work is to revert to a previous version of scipy (I used the latest binary release for my distro). So you can try to 1. Install the numpy masked array branch. 2. Install a binary scipy release. 3. Apply the attached patch to timeseries, converting imports from scipy.sandbox.maskedarray to numpy.ma Good luck, David 2008/1/18, Willi Richert : > > Hallo, > > Hi, > > I've installed timeseries from the sandbox package. Having the latest > scipy/numpy versions I get: > > In [1]: from timeseries.lib.moving_funcs import mov_average_expw as > mov_avg > --------------------------------------------------------------------------- > > Traceback (most recent call > last) > > /home/wr/wlan/ in () > > /usr/lib/python2.5/site-packages/timeseries/__init__.py in () > 13 > 14 import const > ---> 15 import dates > 16 from dates import * > 17 import tseries > > /usr/lib/python2.5/site-packages/timeseries/dates.py in () > 26 from numpy.core.numerictypes import generic > 27 > ---> 28 import maskedarray as MA > 29 > 30 from parser import DateFromString, DateTimeFromString > > /usr/lib/python2.5/site-packages/maskedarray/__init__.py in () > 15 from core import * > 16 > ---> 17 import extras > 18 from extras import * > 19 > > /usr/lib/python2.5/site-packages/maskedarray/extras.py in () > 36 from numpy.core.fromnumeric import asarray as nxasarray > 37 > ---> 38 from numpy.lib.index_tricks import AxisConcatenator > 39 import numpy.lib.function_base as function_base > 40 > > > Regards, > wr > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ma_branch.patch Type: text/x-patch Size: 12072 bytes Desc: not available URL: From R.Springuel at umit.maine.edu Fri Jan 18 14:23:11 2008 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Fri, 18 Jan 2008 14:23:11 -0500 Subject: [SciPy-user] apply_along_axis but with multiple arguments Message-ID: <4790FC9F.7000208@umit.maine.edu> I've written my own function that works on an array and am attempting to add axis control to that function so that it works similar to mean, sum, etc. which already exist in numpy. Said function, however, has the option of weighting each point differently when doing its calculations according to an optional weights argument (similar to average). Thus, in adding axis control to the function, I have to pull sub-arrays from two arrays simultaneously (the data and weights), and do so in such a way that they correspond to each other appropriately. I've thought about using apply_along_axis to do this, but it seems that the function doesn't allow the simultaneous variation of two arrays, only the variation of one. Can anyone shed some light on how I might do this? Since I'm not sure the above is totally clear, let me try putting it this way too: Let's assume for the moment that the average function in numpy didn't have axis control built into it. If I wanted to take the weighted average of an array along a certain axis could I use apply_along_axis in such a way that both a (the data) and weights are varied simultaneously along that axis? I.e. I'm looking for a way to use apply_along_axis to duplicate average(a,axis,weights) using average(a,weights) in the same way that apply_along_axis(average,axis,a) duplicates average(a,axis). Would apply_over_axes or vectorize be more appropriate, and if so how? -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 309 Office Hours: By appointment only From rmay at ou.edu Fri Jan 18 14:34:52 2008 From: rmay at ou.edu (Ryan May) Date: Fri, 18 Jan 2008 13:34:52 -0600 Subject: [SciPy-user] Correlate Times? In-Reply-To: <479032E5.1060406@enthought.com> References: <478FCCC1.9060806@ou.edu> <479032E5.1060406@enthought.com> Message-ID: <4790FF5C.8000502@ou.edu> Travis E. Oliphant wrote: > scipy.signal.correlate is a N-d correlation algorithm as has been > noted. It is going to be slower for 1-d arrays. Now, there is nothing > wrong with checking for that case and calling the 1-d version, it's just > never been done (probably because people who only need 1-d correlation > are already just using numpy.correlate). Since I was doing some signal processing stuff, I figured scipy.signal.correlate might work better, but obviously that was incorrect. > ndimage also has N-d correlation inside it which was created much > later. I think it is currently faster (but with different arguments > that I don't fully understand so, I'm not sure what command would be > equivalent. > > Try scipy.ndimage.correlate and see how fast it is. It looks like you can make it work like: data2 = N.r_[data, N.zeros_like(data)] scipy.ndimage.correlate1d(data2, data2, mode='constant') On my machine however, timeit gives: 10 loops, best of 3: 4.62 s per loop Which puts it between scipy.signal and straight up Numpy. > > This is part of the kind of clean-up that SciPy really needs. Clearly. I know what I really need is a 1D correlation routine that will run on an ND array. 
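Following up on Georg's FFT suggestion, such a routine is short to write; a rough sketch of my own (not anything shipping in numpy or scipy -- it matches numpy.correlate(..., 'full') for real inputs up to rounding, and ignores normalization and complex data):

import numpy as np

def correlate1d_fft(a, v, axis=-1):
    # full linear cross-correlation of a with the 1-d array v, applied
    # along one axis of a; for real inputs, correlation with v is
    # convolution with v reversed
    n = a.shape[axis] + len(v) - 1           # length of the 'full' result
    fsize = int(2 ** np.ceil(np.log2(n)))    # zero-pad to a power of two
    Fa = np.fft.fft(a, fsize, axis=axis)
    Fv = np.fft.fft(v[::-1], fsize)
    shape = [1] * a.ndim
    shape[axis] = fsize                      # broadcast Fv over the other axes
    c = np.fft.ifft(Fa * Fv.reshape(shape), axis=axis).real
    sl = [slice(None)] * a.ndim
    sl[axis] = slice(0, n)                   # trim the circular padding
    return c[tuple(sl)]

up = np.random.rand(4, 18000)
rows = correlate1d_fft(up, up[0])            # each row against the first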
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From hetland at tamu.edu Fri Jan 18 14:39:32 2008 From: hetland at tamu.edu (Rob Hetland) Date: Fri, 18 Jan 2008 20:39:32 +0100 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: <478FC3D6.4000609@gmail.com> References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> <478FC3D6.4000609@gmail.com> Message-ID: On Jan 17, 2008, at 10:08 PM, Robert Kern wrote: > I cannot reproduce this with the code you gave. Is there another > test case that > you have available? Robert- Do you mean that it does not crash, or does not leak? I think that it might take quite a while to actually crash, especially if you have a nice machine with lots of memory... -Rob ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From robert.kern at gmail.com Fri Jan 18 14:52:32 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 Jan 2008 13:52:32 -0600 Subject: [SciPy-user] Memory leak in delaunay interpolator In-Reply-To: References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> <478FC3D6.4000609@gmail.com> Message-ID: <47910380.5060608@gmail.com> Rob Hetland wrote: > On Jan 17, 2008, at 10:08 PM, Robert Kern wrote: > >> I cannot reproduce this with the code you gave. Is there another >> test case that >> you have available? > > Robert- > > Do you mean that it does not crash, or does not leak? > > I think that it might take quite a while to actually crash, > especially if you have a nice machine with lots of memory... Its memory usage remains constant on my OS X machine. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From saintmlx at apstat.com Fri Jan 18 16:44:51 2008 From: saintmlx at apstat.com (Xavier Saint-Mleux) Date: Fri, 18 Jan 2008 16:44:51 -0500 Subject: [SciPy-user] [ANN] pysamplerate: available in scikits, as samplerate In-Reply-To: <4790C9EC.3000102@ar.media.kyoto-u.ac.jp> References: <4790C9EC.3000102@ar.media.kyoto-u.ac.jp> Message-ID: <47911DD3.4040608@apstat.com> Hi David, I just tried to install your samplerate package and it seems to have a bootstrap problem: setup.py tries to import scikits/samplerate/pysamplerate.py before actually building it. e.g.: Traceback (most recent call last): File "setup.py", line 26, in ? from scikits.samplerate.info import _C_SRC_MAJ_VERSION as SAMPLERATE_MAJ_VERSION File "[...]/scikits/trunk/samplerate/scikits/samplerate/__init__.py", line 6, in ? from pysamplerate import resample, converter_format ImportError: No module named pysamplerate Am I missing something? Thanks, Xavier Saint-Mleux David Cournapeau wrote: > Hi, > > A quick announcement: pysamplerate is finally available in the > scikits as I said I would do for months; I changed the name to > samplerate. The only change since last time I touched it is the > conversion to setuptools (necessary for scikits). > samplerate is a really simple wrapper around SRC, to do high quality > audio resampling, and is licensed under the GPL. 
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From J.Anderson at hull.ac.uk Fri Jan 18 16:50:33 2008 From: J.Anderson at hull.ac.uk (Joseph Anderson) Date: Fri, 18 Jan 2008 21:50:33 -0000 Subject: [SciPy-user] newbie trouble with lfilter References: <20080118072710.GK28865@mentat.za.net> Message-ID: Hello Stéfan, Thanks for the reply on this. As I look at Ticket #331, this doesn't exactly describe the situation I'm encountering. My problem involves supplying and recovering the state (for a 2-d array). At least for a 2-d array (which is what I'm interested in for my problem), lfilter appears to work correctly, provided that one doesn't want to supply the initial state (zi) and recover the final state (zf) of the filter. The code listed below seems to work, giving the expected result. For my application, however, it is important for me to be able to supply and recover the state. I'm presuming, then, that the way forward for the immediate future is to just unwind my 2-d array by hand and process as 1-d arrays. Ah well; but for the state problem, it was working fine for me otherwise. # ************************************** # test filter: 2-d array # # ************************************** from numpy import * from numpy.random import * from scipy.signal import * # order and cutoff N = 1 freq = 0.1 # signal, two channels x = array([uniform(-1., 1., 8), zeros(8)]).transpose() # design filter b, a = butter(N, freq) # filter y = lfilter(b, a, x, 0) print "b, a:", b, a print "x:", x print "y:", y My regards, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Dr Joseph Anderson Lecturer in Music School of Arts and New Media University of Hull, Scarborough Campus, Scarborough, North Yorkshire, YO11 3AZ, UK T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -----Original Message----- From: scipy-user-bounces at scipy.org on behalf of Stefan van der Walt Sent: Fri 01/18/2008 7:27 AM To: SciPy Users List Subject: Re: [SciPy-user] newbie trouble with lfilter Hi Joseph On Thu, Jan 17, 2008 at 12:30:11PM -0000, Joseph Anderson wrote: > Trying something different and changing zi to: > > zi = zeros(N * 2).reshape(N, 2) > > Results in a crash with the message: > > Process Python segmentation fault This should never happen. A bug is filed at http://projects.scipy.org/scipy/scipy/ticket/331 Regards Stéfan _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3853 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From Consult at KandA-Ltd.com Fri Jan 18 17:30:51 2008 From: Consult at KandA-Ltd.com (Kenneth Kalan) Date: Fri, 18 Jan 2008 16:30:51 -0600 (CST) Subject: [SciPy-user] Compiling scipy on RHEL 5 x64 Message-ID: <1257.129.105.205.89.1200695451.squirrel@mail.kxs.net> I'm having problems compiling scipy on a RHEL 5 64 bit box. For my 32 bit boxes, I've used the rpms from the Ashigabou Repository. I've removed the stock blas and lapack. I've built Atlas, lapack3, refblas3, ufsparse and python-numpy.
versions (from src.rpm): lapack3-3.0.18.1 python-numpy-1.0.4-3.1 refblas3-3.0.9.1 ufsparse-2.1.1-1-fc6 atlas-3.7.33.8.1 scipy first complained about not finding the umfpack header files. Fixed that and here is an excerpt from where it is failing now. -- build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: too few arguments to function '__builtin_object_size' build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: expected expression before ')' token build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: too few arguments to function '__builtin___memcpy_chk' build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: expected expression before ')' token build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: too few arguments to function '__memcpy_ichk' error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC -DSCIPY_UMFPACK_H -DNO_ATLAS_INFO=1 -I/usr/include -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c -o build/temp.linux-x86_64-2.4/build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.o" failed with exit status 1 error: Bad exit status from /var/tmp/rpm-tmp.45378 (%build) RPM build errors: Bad exit status from /var/tmp/rpm-tmp.45378 (%build) -- There are several of those error's, this is just the end part of the build. Any pointers would be appreciated. Thanks, Ken From stefan at sun.ac.za Fri Jan 18 18:20:49 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 19 Jan 2008 01:20:49 +0200 Subject: [SciPy-user] timeseries import error In-Reply-To: <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> Message-ID: <20080118232049.GE19790@mentat.za.net> Hi David On Fri, Jan 18, 2008 at 12:09:02PM -0500, David Huard wrote: > The current timeseries depends on scipy.sandbox.maskedarray, but as you have > seen, it doesnt't work so well. > timeseries can be easily modified to depend on the ma module in the numpy > maskedarray branch, but since the branch dates from about two months, the > latest changes to numpy haven't been merged and scipy is currently > imcompatible with the numpy's maskedarray branch. As a consequence, timeseries > (depending on scipy.io for instance) won't import. I have merged the latest numpy changes into the maskedarray branch. Please try it again, and report any problems. Thanks St?fan From david at ar.media.kyoto-u.ac.jp Sat Jan 19 00:16:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 19 Jan 2008 14:16:36 +0900 Subject: [SciPy-user] [ANN] pysamplerate: available in scikits, as samplerate In-Reply-To: <47911DD3.4040608@apstat.com> References: <4790C9EC.3000102@ar.media.kyoto-u.ac.jp> <47911DD3.4040608@apstat.com> Message-ID: <479187B4.8080407@ar.media.kyoto-u.ac.jp> Xavier Saint-Mleux wrote: > Hi David, > > I just tried to install your samplerate package and it seems to have a > bootstrap problem: setup.py tries to import > scikits/samplerate/pysamplerate.py before actually building it. > > e.g.: > Traceback (most recent call last): > File "setup.py", line 26, in ? 
> from scikits.samplerate.info import _C_SRC_MAJ_VERSION as > SAMPLERATE_MAJ_VERSION > File "[...]/scikits/trunk/samplerate/scikits/samplerate/__init__.py", > line 6, in ? > from pysamplerate import resample, converter_format > ImportError: No module named pysamplerate > > Am I missing something? > I Will look into it, thanks. cheers, David From david at ar.media.kyoto-u.ac.jp Sat Jan 19 01:05:11 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 19 Jan 2008 15:05:11 +0900 Subject: [SciPy-user] [ANN] pysamplerate: available in scikits, as samplerate In-Reply-To: <479187B4.8080407@ar.media.kyoto-u.ac.jp> References: <4790C9EC.3000102@ar.media.kyoto-u.ac.jp> <47911DD3.4040608@apstat.com> <479187B4.8080407@ar.media.kyoto-u.ac.jp> Message-ID: <47919317.2030408@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Xavier Saint-Mleux wrote: > >> Hi David, >> >> I just tried to install your samplerate package and it seems to have a >> bootstrap problem: setup.py tries to import >> scikits/samplerate/pysamplerate.py before actually building it. >> >> e.g.: >> Traceback (most recent call last): >> File "setup.py", line 26, in ? >> from scikits.samplerate.info import _C_SRC_MAJ_VERSION as >> SAMPLERATE_MAJ_VERSION >> File "[...]/scikits/trunk/samplerate/scikits/samplerate/__init__.py", >> line 6, in ? >> from pysamplerate import resample, converter_format >> ImportError: No module named pysamplerate >> >> Am I missing something? >> >> Should be solved, now. cheers, David From ryanlists at gmail.com Sat Jan 19 10:14:27 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 19 Jan 2008 09:14:27 -0600 Subject: [SciPy-user] 'NullTester' object has no attribute 'bench' Message-ID: I updated from svn last night and something broke. import scipy produces the following: In [1]: import scipy --------------------------------------------------------------------------- Traceback (most recent call last) /home/ryan/ in () /usr/lib/python2.5/site-packages/scipy/__init__.py in () 65 from testing.pkgtester import Tester 66 test = Tester().test ---> 67 bench = Tester().bench 68 __doc__ += """ 69 Am I missing a dependency or something? Thanks, Ryan From ryanlists at gmail.com Sat Jan 19 10:20:39 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 19 Jan 2008 09:20:39 -0600 Subject: [SciPy-user] 'NullTester' object has no attribute 'bench' In-Reply-To: References: Message-ID: FYI, I have checked out revision 3850. On Jan 19, 2008 9:14 AM, Ryan Krauss wrote: > I updated from svn last night and something broke. > > import scipy > > produces the following: > > In [1]: import scipy > --------------------------------------------------------------------------- > Traceback (most recent call last) > > /home/ryan/ in () > > /usr/lib/python2.5/site-packages/scipy/__init__.py in () > 65 from testing.pkgtester import Tester > 66 test = Tester().test > ---> 67 bench = Tester().bench > 68 __doc__ += """ > 69 > > > Am I missing a dependency or something? > > Thanks, > > Ryan > From matthew.brett at gmail.com Sat Jan 19 10:40:14 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 19 Jan 2008 15:40:14 +0000 Subject: [SciPy-user] 'NullTester' object has no attribute 'bench' In-Reply-To: References: Message-ID: <1e2af89e0801190740o530ba8adm82ed7f6800bf1315@mail.gmail.com> Oh whoops, my fault. I've fixed in SVN. Matthew On Jan 19, 2008 3:14 PM, Ryan Krauss wrote: > I updated from svn last night and something broke. 
> > import scipy > > produces the following: > > In [1]: import scipy > --------------------------------------------------------------------------- > Traceback (most recent call last) > > /home/ryan/ in () > > /usr/lib/python2.5/site-packages/scipy/__init__.py in () > 65 from testing.pkgtester import Tester > 66 test = Tester().test > ---> 67 bench = Tester().bench > 68 __doc__ += """ > 69 > > > Am I missing a dependency or something? > > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Sat Jan 19 11:56:42 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 19 Jan 2008 10:56:42 -0600 Subject: [SciPy-user] 'NullTester' object has no attribute 'bench' In-Reply-To: <1e2af89e0801190740o530ba8adm82ed7f6800bf1315@mail.gmail.com> References: <1e2af89e0801190740o530ba8adm82ed7f6800bf1315@mail.gmail.com> Message-ID: Thanks. I will rebuild and try again (in a few minutes). On 1/19/08, Matthew Brett wrote: > Oh whoops, my fault. > > I've fixed in SVN. > > Matthew > > On Jan 19, 2008 3:14 PM, Ryan Krauss wrote: > > I updated from svn last night and something broke. > > > > import scipy > > > > produces the following: > > > > In [1]: import scipy > > --------------------------------------------------------------------------- > > Traceback (most recent call last) > > > > /home/ryan/ in () > > > > /usr/lib/python2.5/site-packages/scipy/__init__.py in () > > 65 from testing.pkgtester import Tester > > 66 test = Tester().test > > ---> 67 bench = Tester().bench > > 68 __doc__ += """ > > 69 > > > > > > Am I missing a dependency or something? > > > > Thanks, > > > > Ryan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Sat Jan 19 12:16:41 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 19 Jan 2008 11:16:41 -0600 Subject: [SciPy-user] 'NullTester' object has no attribute 'bench' In-Reply-To: References: <1e2af89e0801190740o530ba8adm82ed7f6800bf1315@mail.gmail.com> Message-ID: Thanks again Matthew. I am back in business. scipy imports correctly and I seem to have normal functionality. scipy.test() prints this fairly polite message: In [2]: scipy.test() --------------------------------------------------------------------------- Traceback (most recent call last) /home/ryan/ in () /usr/lib/python2.5/site-packages/scipy/testing/nulltester.py in test(self, labels, *args, **kwargs) 13 pass 14 def test(self, labels=None, *args, **kwargs): ---> 15 raise ImportError, 'Need nose for tests - see %s' % nose_url 16 def bench(self, labels=None, *args, **kwargs): 17 raise ImportError, 'Need nose for benchmarks - see %s' % nose_url : Need nose for tests - see http://somethingaboutorange.com/mrl/projects/nose which does what it is supposed to and tells me to install the nose package. I will do that later. Thanks again, Ryan On Jan 19, 2008 10:56 AM, Ryan Krauss wrote: > Thanks. I will rebuild and try again (in a few minutes). > > > On 1/19/08, Matthew Brett wrote: > > Oh whoops, my fault. > > > > I've fixed in SVN. > > > > Matthew > > > > On Jan 19, 2008 3:14 PM, Ryan Krauss wrote: > > > I updated from svn last night and something broke. 
> > > import scipy
> > >
> > > produces the following:
> > >
> > > In [1]: import scipy
> > > ---------------------------------------------------------------------------
> > > Traceback (most recent call last)
> > >
> > > /home/ryan/ in ()
> > >
> > > /usr/lib/python2.5/site-packages/scipy/__init__.py in ()
> > > 65 from testing.pkgtester import Tester
> > > 66 test = Tester().test
> > > ---> 67 bench = Tester().bench
> > > 68 __doc__ += """
> > > 69
> > >
> > > Am I missing a dependency or something?
> > >
> > > Thanks,
> > >
> > > Ryan
> > > _______________________________________________
> > > SciPy-user mailing list
> > > SciPy-user at scipy.org
> > > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user

From hetland at tamu.edu Sat Jan 19 20:16:50 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Sun, 20 Jan 2008 02:16:50 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <20080115130429.F462016@jazz.ncnr.nist.gov>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov>
Message-ID: <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu>

On Jan 15, 2008, at 7:04 PM, Paul Kienzle wrote:
> Instead of creating the ctypes struct I just used numpy to create a scalar of the correct structure. I wrapped it in a class so that I could reference the fields directly as e.g., instance.A. I used a factory function for generating the class from the structure definition since I will need to wrap several structures.
>
> I'm attaching cstruct.py and embedarray.c where I demonstrate this.
>
> I'm particularly pleased that I can assign a 4x4 array to instance.A and it just works!
>
> The ctypes docs talk about possible alignment issues for the structs on some architectures. They say that ctypes follows the conventions of the compiler which created it. I haven't checked if this will be a problem with the numpy solution you outline above.

How do you pass ndarrays of arbitrary size? I would like to have some C code like:

extern void bar(double **data, int L, int M) {
    ...
}

The code needs to pass the array on to another library as a **double, and I cannot seem to get ctypes to pass a 2 dimensional array to the subroutine. Arrays of fixed size seem to work fine, and single dimensional *doubles also work fine, even when passing multidimensional numpy.ndarrays.

But, how to pass an arbitrary (two-dimensional) numpy.ndarray as a **double?

-Rob

----
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331

From ac1201 at gmail.com Sat Jan 19 22:17:21 2008
From: ac1201 at gmail.com (Andrew Charles)
Date: Sun, 20 Jan 2008 14:17:21 +1100
Subject: [SciPy-user] f2py support for fortran 90 derived types
Message-ID:

Is anyone aware of the current status of the work Jeffrey Hagelberg was doing to extend f2py to handle fortran 90 derived types?

-------------------------
Andrew Charles
Centre for Australian Weather and Climate, Australian Bureau of Meteorology.
Condensed Matter Theory Group, RMIT University.
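For reference, a minimal ctypes sketch of the row-pointer idea discussed in this thread: build a temporary array of per-row pointers on the Python side and hand it to the double** argument. The library name (libbar.so) and the wrapper (call_bar) are illustrative assumptions, not from the original posts:

import ctypes
import numpy as np

lib = ctypes.CDLL('./libbar.so')   # hypothetical library exposing bar()
lib.bar.restype = None
lib.bar.argtypes = [ctypes.POINTER(ctypes.POINTER(ctypes.c_double)),
                    ctypes.c_int, ctypes.c_int]

def call_bar(data):
    # make sure the array is contiguous, C-ordered float64
    data = np.ascontiguousarray(data, dtype=np.float64)
    L, M = data.shape
    # one pointer per row, each pointing into the ndarray's own buffer
    RowPtrs = ctypes.POINTER(ctypes.c_double) * L
    rows = RowPtrs(*(data[i].ctypes.data_as(ctypes.POINTER(ctypes.c_double))
                     for i in range(L)))
    lib.bar(rows, L, M)
    # 'rows' is garbage-collected by Python; nothing to free by hand

The pointer array lives only for the duration of the call, so the ndarray's buffer is never copied, and nothing is freed behind numpy's back.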
From matthieu.brucher at gmail.com Sun Jan 20 04:01:12 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 20 Jan 2008 10:01:12 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu>
Message-ID:

> How do you pass ndarrays of arbitrary size? I would like to have some C code like:
>
> extern void bar(double **data, int L, int M) {
>     ...
> }
>
> The code needs to pass the array on to another library as a **double, and I cannot seem to get ctypes to pass a 2 dimensional array to the subroutine. Arrays of fixed size seem to work fine, and single dimensional *doubles also work fine, even when passing multidimensional numpy.ndarrays.
>
> But, how to pass an arbitrary (two-dimensional) numpy.ndarray as a **double?

This function signature supposes that you will access an element by data[i][j], which is not the way Numpy works.

You can create a wrapper function that will allocate a double** 1D array pointing to the adequate *double (start of a line), pass it to your function and then deallocate the array when returning from the function.

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hetland at tamu.edu Sun Jan 20 18:03:47 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Mon, 21 Jan 2008 00:03:47 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To:
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu>
Message-ID: <7BCBE5DB-A1AC-4D9B-987E-275405986621@tamu.edu>

On Jan 20, 2008, at 10:01 AM, Matthieu Brucher wrote:
> This function signature supposes that you will access an element by data[i][j], which is not the way Numpy works.

This seems to be the way the example attached by Paul Kienzle works, albeit for fixed size arrays. Would there be a way to dynamically set the size for the fixed array, based on other input integers?

> You can create a wrapper function that will allocate a double** 1D array pointing to the adequate *double (start of a line), pass it to your function and then deallocate the array when returning from the function.

Can you show an example? I tried to do something similar, but failed (due to poor coding skills..). Also, is it better to create the needed list of pointers on the python side, or the C side?

I've had this sort of problem a few times, and I can't seem to find a general solution anywhere. Any advice would help,

-Rob

----
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331

From yennifersantiago at gmail.com Sun Jan 20 18:17:03 2008
From: yennifersantiago at gmail.com (Yennifer Santiago)
Date: Sun, 20 Jan 2008 19:17:03 -0400
Subject: [SciPy-user] Install SciPy
Message-ID: <41bc705b0801201517k1d729ff6ue92ca8c613ee70e8@mail.gmail.com>

Hello...
I have been trying to install Scipy 0.6.0, but it gives an error when I try to install the packages LAPACK and ATLAS. I have been following the steps in the file INSTALL.txt of SciPy.
These are the errors that appear when I execute the following command (having already fulfilled the previous steps):

In Lapack:

carolina at carolinapc:~/lapack/lapack-3.1.1$ make lapacklib
Makefile:7: make.inc: No existe el fichero ó directorio
make: *** No hay ninguna regla para construir el objetivo `make.inc'. Alto.

in Atlas:

carolina at carolinapc:~/atlas3.8.0_Linux_Core2Duo64SSE3_2/ATLAS3.8.0$ make
make: *** No se especificó ningún objetivo y no se encontró ningún makefile. Alto.

Yennifer

From cohen at slac.stanford.edu Sun Jan 20 21:25:25 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Sun, 20 Jan 2008 18:25:25 -0800
Subject: [SciPy-user] Install SciPy
In-Reply-To: <41bc705b0801201517k1d729ff6ue92ca8c613ee70e8@mail.gmail.com>
References: <41bc705b0801201517k1d729ff6ue92ca8c613ee70e8@mail.gmail.com>
Message-ID: <47940295.1060703@slac.stanford.edu>

hi Yennifer,
as the error states, it seems that you are trying to build from a directory which does not have any Makefile. Could you ls in these dirs to show us what they contain? Besides, I would strongly suggest that you pick up David's libraries if you can:
http://download.opensuse.org/repositories/home:/ashigabou/
HTH,
Johann

Yennifer Santiago wrote:
> Hello...
> I have been trying to install Scipy 0.6.0, but it gives an error when I try to install the packages LAPACK and ATLAS. I have been following the steps in the file INSTALL.txt of SciPy.
>
> These are the errors that appear when I execute the following command (having already fulfilled the previous steps):
>
> In Lapack:
>
> carolina at carolinapc:~/lapack/lapack-3.1.1$ make lapacklib
> Makefile:7: make.inc: No existe el fichero ó directorio
> make: *** No hay ninguna regla para construir el objetivo `make.inc'. Alto.
>
> in Atlas:
>
> carolina at carolinapc:~/atlas3.8.0_Linux_Core2Duo64SSE3_2/ATLAS3.8.0$ make
> make: *** No se especificó ningún objetivo y no se encontró ningún makefile. Alto.
>
> Yennifer
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From cohen at slac.stanford.edu Sun Jan 20 21:34:26 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Sun, 20 Jan 2008 18:34:26 -0800
Subject: [SciPy-user] extracting an array with only non repeated entries
Message-ID: <479404B2.9000702@slac.stanford.edu>

hello,
I have an array with integers, some of them repeating several times, and I would like to get the array of all the integers present in the initial array, but without repetition.... I started looking at the doc but haven't found such a functionality yet.
thanks in advance,
Johann

From dwf at cs.toronto.edu Sun Jan 20 21:45:22 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sun, 20 Jan 2008 21:45:22 -0500
Subject: [SciPy-user] extracting an array with only non repeated entries
In-Reply-To: <479404B2.9000702@slac.stanford.edu>
References: <479404B2.9000702@slac.stanford.edu>
Message-ID:

On 20-Jan-08, at 9:34 PM, Johann Cohen-Tanugi wrote:
> hello,
> I have an array with integers, some of them repeating several times, and I would like to get the array of all the integers present in the initial array, but without repetition.... I started looking at the doc but haven't found such a functionality yet.

You're looking for the function numpy.unique1d.
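A quick illustration of the suggested function (an added sketch, not part of the original reply; in later numpy releases unique1d was folded into numpy.unique):

import numpy
a = numpy.array([2, 3, 3, 1, 2, 7])
numpy.unique1d(a)    # -> array([1, 2, 3, 7]): every value once, sorted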
Regards,

David

From cohen at slac.stanford.edu Sun Jan 20 22:01:22 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Sun, 20 Jan 2008 19:01:22 -0800
Subject: [SciPy-user] extracting an array with only non repeated entries
In-Reply-To:
References: <479404B2.9000702@slac.stanford.edu>
Message-ID: <47940B02.30407@slac.stanford.edu>

fantastic! thx so much,
Johann

David Warde-Farley wrote:
> On 20-Jan-08, at 9:34 PM, Johann Cohen-Tanugi wrote:
>> hello,
>> I have an array with integers, some of them repeating several times, and I would like to get the array of all the integers present in the initial array, but without repetition.... I started looking at the doc but haven't found such a functionality yet.
>
> You're looking for the function numpy.unique1d.
>
> Regards,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From david at ar.media.kyoto-u.ac.jp Sun Jan 20 22:25:00 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 21 Jan 2008 12:25:00 +0900
Subject: [SciPy-user] Install SciPy
In-Reply-To: <41bc705b0801201517k1d729ff6ue92ca8c613ee70e8@mail.gmail.com>
References: <41bc705b0801201517k1d729ff6ue92ca8c613ee70e8@mail.gmail.com>
Message-ID: <4794108C.1040501@ar.media.kyoto-u.ac.jp>

Yennifer Santiago wrote:
> Hello...
> I have been trying to install Scipy 0.6.0, but it gives an error when I try to install the packages LAPACK and ATLAS. I have been following the steps in the file INSTALL.txt of SciPy.
>
> These are the errors that appear when I execute the following command (having already fulfilled the previous steps):
>
> In Lapack:
>
> carolina at carolinapc:~/lapack/lapack-3.1.1$ make lapacklib
> Makefile:7: make.inc: No existe el fichero ó directorio
> make: *** No hay ninguna regla para construir el objetivo `make.inc'. Alto.
>
> in Atlas:
>
> carolina at carolinapc:~/atlas3.8.0_Linux_Core2Duo64SSE3_2/ATLAS3.8.0$ make
> make: *** No se especificó ningún objetivo y no se encontró ningún makefile. Alto.

Installing LAPACK, and even more so ATLAS, is much more involved than just typing make, unfortunately. If you are not familiar with building fortran code, I strongly recommend installing those packages through your distribution (assuming you are on Linux). Which distribution are you on? Binary packages for blas/lapack/atlas are available (at least) for Debian/Ubuntu/FC (6 and 7) and openSUSE.

cheers,

David

From matthieu.brucher at gmail.com Mon Jan 21 01:47:37 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 21 Jan 2008 07:47:37 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <7BCBE5DB-A1AC-4D9B-987E-275405986621@tamu.edu>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <7BCBE5DB-A1AC-4D9B-987E-275405986621@tamu.edu>
Message-ID:

2008/1/21, Rob Hetland :
>
> On Jan 20, 2008, at 10:01 AM, Matthieu Brucher wrote:
>> This function signature supposes that you will access an element by data[i][j], which is not the way Numpy works.
>
> This seems to be the way the example attached by Paul Kienzle works, albeit for fixed size arrays. Would there be a way to dynamically set the size for the fixed array, based on other input integers?

No, in fact Paul's example uses [4][4], and this is laid out by the compiler like a flat double*.
> You can create a wrapper function that will allocate a double** 1D array pointing to the adequate *double (start of a line), pass it to your function and then deallocate the array when returning from the function.
>
> Can you show an example? I tried to do something similar, but failed (due to poor coding skills..). Also, is it better to create the needed list of pointers on the python side, or the C side?

Definitely on the C side. I'll try to give you an example today (but no promises)

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From strawman at astraw.com Mon Jan 21 03:10:41 2008
From: strawman at astraw.com (Andrew Straw)
Date: Mon, 21 Jan 2008 00:10:41 -0800
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <7BCBE5DB-A1AC-4D9B-987E-275405986621@tamu.edu>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <7BCBE5DB-A1AC-4D9B-987E-275405986621@tamu.edu>
Message-ID: <47945381.9000700@astraw.com>

Hi Rob,

I am far from being a weave expert, having used it once in my life, but I think you may find some macros of interest if you examine the output C file. Examining these files was how I derived the following function:

def increment_fast(final_array_shape, idxa, idxb, idxc):
    counts = numpy.zeros(final_array_shape, dtype=numpy.uint64)
    assert len(idxa.shape)==1
    assert len(idxa)==len(idxb)
    assert len(idxa)==len(idxc)
    code = r"""
    for (int i=0; i<Nidxa[0]; i++) {
        COUNTS3(IDXA1(i), IDXB1(i), IDXC1(i))++;
    }
    """
    weave.inline(code, ['counts', 'idxa', 'idxb', 'idxc'])
    return counts

Rob Hetland wrote:
> On Jan 20, 2008, at 10:01 AM, Matthieu Brucher wrote:
>> This function signature supposes that you will access an element by data[i][j], which is not the way Numpy works.
>
> This seems to be the way the example attached by Paul Kienzle works, albeit for fixed size arrays. Would there be a way to dynamically set the size for the fixed array, based on other input integers?
>
>> You can create a wrapper function that will allocate a double** 1D array pointing to the adequate *double (start of a line), pass it to your function and then deallocate the array when returning from the function.
>
> Can you show an example? I tried to do something similar, but failed (due to poor coding skills..). Also, is it better to create the needed list of pointers on the python side, or the C side?
>
> I've had this sort of problem a few times, and I can't seem to find a general solution anywhere. Any advice would help,
>
> -Rob
>
> ----
> Rob Hetland, Associate Professor
> Dept. of Oceanography, Texas A&M University
> http://pong.tamu.edu/~rob
> phone: 979-458-0096, fax: 979-845-6331
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From hetland at tamu.edu Mon Jan 21 03:32:56 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Mon, 21 Jan 2008 09:32:56 +0100
Subject: [SciPy-user] Memory leak in delaunay interpolator
In-Reply-To: <47910380.5060608@gmail.com>
References: <478EE262.3020305@gmail.com> <478EF15B.9050806@gmail.com> <478F9CE6.2050904@gmail.com> <478FC3D6.4000609@gmail.com> <47910380.5060608@gmail.com>
Message-ID:

On Jan 18, 2008, at 8:52 PM, Robert Kern wrote:
> Its memory usage remains constant on my OS X machine.
Memory increased linearly on my machine (OS X 10.4), as well as on a relatively new install of Ubuntu. On the Ubuntu machine, the leak was large enough to cause a crash eventually. I didn't have the patience to wait for a crash on my Mac..

I wish I had more information to give. I'm pretty sure the memory leak is really there on my machine (and the Ubuntu machine), but I guess it is not reproducible across different platforms. Perhaps the problem will simply go away when I upgrade to 10.5.

Other potentially relevant info:

~$ g++ --version
i686-apple-darwin8-g++-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5367)

>>> numpy.__version__
'1.0.5.dev4722'

~$ python --version
Python 2.5

-Rob

----
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331

From timmichelsen at gmx-topmail.de Mon Jan 21 04:20:12 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Mon, 21 Jan 2008 09:20:12 +0000 (UTC)
Subject: [SciPy-user] timeseries import error
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com>
Message-ID:

Hi!

> So you can try to 1. install the numpy masked array branch. 2. Install a binary scipy release 3. Apply the attached patch to timeseries, converting import

If you happen to work on Ubuntu 7.10 gutsy, I can send you checkinstall-created *.debs, built from the code in the SciPy sandbox for both maskedarray and timeseries. Please drop me a PM.

@Pierre/Matt: Is there anything a non-C-programmer can help with to get the package ready for a scikit?

Kind regards,
Timmie

From denisbz at t-online.de Mon Jan 21 08:35:25 2008
From: denisbz at t-online.de (denis bzowy)
Date: Mon, 21 Jan 2008 14:35:25 +0100
Subject: [SciPy-user] scipy with mac 10.4 vecLib, undefined dso_handle ?
Message-ID: <47949F9D.8060200@t-online.de>

An HTML attachment was scrubbed...
URL:

From saintmlx at apstat.com Mon Jan 21 12:07:42 2008
From: saintmlx at apstat.com (Xavier Saint-Mleux)
Date: Mon, 21 Jan 2008 12:07:42 -0500
Subject: [SciPy-user] [ANN] pysamplerate: available in scikits, as samplerate
In-Reply-To: <47919317.2030408@ar.media.kyoto-u.ac.jp>
References: <4790C9EC.3000102@ar.media.kyoto-u.ac.jp> <47911DD3.4040608@apstat.com> <479187B4.8080407@ar.media.kyoto-u.ac.jp> <47919317.2030408@ar.media.kyoto-u.ac.jp>
Message-ID: <4794D15E.8040501@apstat.com>

It works just fine, thanks a lot!

Xavier

David Cournapeau wrote:
> David Cournapeau wrote:
>> Xavier Saint-Mleux wrote:
>>> Hi David,
>>>
>>> I just tried to install your samplerate package and it seems to have a bootstrap problem: setup.py tries to import scikits/samplerate/pysamplerate.py before actually building it.
>>>
>>> e.g.:
>>> Traceback (most recent call last):
>>> File "setup.py", line 26, in ?
>>> from scikits.samplerate.info import _C_SRC_MAJ_VERSION as SAMPLERATE_MAJ_VERSION
>>> File "[...]/scikits/trunk/samplerate/scikits/samplerate/__init__.py", line 6, in ?
>>> from pysamplerate import resample, converter_format
>>> ImportError: No module named pysamplerate
>>>
>>> Am I missing something?
>>>
> Should be solved, now.
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From lorenzo.isella at gmail.com Mon Jan 21 12:46:42 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Mon, 21 Jan 2008 18:46:42 +0100
Subject: [SciPy-user] Again on Sorting Objects
Message-ID:

Dear All,
Again I am struggling with objects I have to re-arrange. Say you have the array (already sorted):

[2 3 4 4 4 4 5 6 6 8 8 8 10 10]

I now would like to find (definitely in a more efficient way than I am doing now):
1) how many different elements I have in the array (in this case 7: 2,3,4,5,6,8,10).
2) where each block of identical, repeated elements starts and finishes. I.e. I would like to break up the array this way:

[2 | 3 | 4 4 4 4 | 5 | 6 6 | 8 8 8 | 10 10 |]

thus getting a list of "right boundaries" of each block of repeated elements.
I have been playing with sort and argsort, but with no success.
Many thanks

Lorenzo

From cohen at slac.stanford.edu Mon Jan 21 12:47:34 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Mon, 21 Jan 2008 09:47:34 -0800
Subject: [SciPy-user] Again on Sorting Objects
In-Reply-To:
References:
Message-ID: <4794DAB6.2050406@slac.stanford.edu>

Lorenzo Isella wrote:
> Dear All,
> Again I am struggling with objects I have to re-arrange. Say you have the array (already sorted):
>
> [2 3 4 4 4 4 5 6 6 8 8 8 10 10]
>
> I now would like to find (definitely in a more efficient way than I am doing now):
> 1) how many different elements I have in the array (in this case 7: 2,3,4,5,6,8,10).
>
I got an answer about a similar question yesterday :) : try numpy.unique1d(your_array)
> 2) where each block of identical, repeated elements starts and finishes. I.e. I would like to break up the array this way:
>
> [2 | 3 | 4 4 4 4 | 5 | 6 6 | 8 8 8 | 10 10 |]
>
that I don't have the answer straight away....
hth,
J.
> thus getting a list of "right boundaries" of each block of repeated elements.
> I have been playing with sort and argsort, but with no success.
> Many thanks
>
> Lorenzo
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Mon Jan 21 12:57:42 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 Jan 2008 11:57:42 -0600
Subject: [SciPy-user] Again on Sorting Objects
In-Reply-To:
References:
Message-ID: <4794DD16.6060408@gmail.com>

Lorenzo Isella wrote:
> Dear All,
> Again I am struggling with objects I have to re-arrange. Say you have the array (already sorted):
>
> [2 3 4 4 4 4 5 6 6 8 8 8 10 10]
>
> I now would like to find (definitely in a more efficient way than I am doing now):
> 1) how many different elements I have in the array (in this case 7: 2,3,4,5,6,8,10).
> 2) where each block of identical, repeated elements starts and finishes. I.e. I would like to break up the array this way:
>
> [2 | 3 | 4 4 4 4 | 5 | 6 6 | 8 8 8 | 10 10 |]
>
> thus getting a list of "right boundaries" of each block of repeated elements.
> I have been playing with sort and argsort, but with no success.
In [15]: from numpy import *

In [16]: a = array([2, 3, 4, 4, 4, 4, 5, 6, 6, 8, 8, 8, 10, 10])

In [17]: u = unique1d(a)

In [18]: u
Out[18]: array([ 2,  3,  4,  5,  6,  8, 10])

In [21]: i = a.searchsorted(u, side='left')

In [22]: i
Out[22]: array([ 0,  1,  2,  6,  7,  9, 12])

In [23]: j = a.searchsorted(u, side='right')

In [24]: j
Out[24]: array([ 1,  2,  6,  7,  9, 12, 14])

In [25]: for ij in zip(i, j):
   ....:     print a[ij[0]:ij[1]]
   ....:
[2]
[3]
[4 4 4 4]
[5]
[6 6]
[8 8 8]
[10 10]

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From aisaac at american.edu Mon Jan 21 14:02:59 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 21 Jan 2008 14:02:59 -0500
Subject: [SciPy-user] Again on Sorting Objects
In-Reply-To: <4794DD16.6060408@gmail.com>
References: <4794DD16.6060408@gmail.com>
Message-ID:

So ``a`` is already sorted? Use groupby?

Cheers,
Alan Isaac

>>> from itertools import groupby
>>> for t,v in groupby(a): print list(v)
...
[2]
[3]
[4, 4, 4, 4]
[5]
[6, 6]
[8, 8, 8]
[10, 10]

From oliphant at enthought.com Mon Jan 21 14:17:36 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Mon, 21 Jan 2008 13:17:36 -0600
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu>
Message-ID: <4794EFD0.3080902@enthought.com>

Rob Hetland wrote:
> On Jan 15, 2008, at 7:04 PM, Paul Kienzle wrote:
>> Instead of creating the ctypes struct I just used numpy to create a scalar of the correct structure. I wrapped it in a class so that I could reference the fields directly as e.g., instance.A. I used a factory function for generating the class from the structure definition since I will need to wrap several structures.
>>
>> I'm attaching cstruct.py and embedarray.c where I demonstrate this.
>>
>> I'm particularly pleased that I can assign a 4x4 array to instance.A and it just works!
>>
>> The ctypes docs talk about possible alignment issues for the structs on some architectures. They say that ctypes follows the conventions of the compiler which created it. I haven't checked if this will be a problem with the numpy solution you outline above.
>
> How do you pass ndarrays of arbitrary size? I would like to have some C code like:
>
> extern void bar(double **data, int L, int M) {
>     ...
> }
>
> The code needs to pass the array on to another library as a **double, and I cannot seem to get ctypes to pass a 2 dimensional array to the subroutine. Arrays of fixed size seem to work fine, and single dimensional *doubles also work fine, even when passing multidimensional numpy.ndarrays.
>
> But, how to pass an arbitrary (two-dimensional) numpy.ndarray as a **double?

You might find PyArray_AsCArray useful. It takes a NumPy array and produces the needed pointers to allow accessing the array as a C-style array. It works for 1-, 2-, and 3-d arrays. There are old Numeric APIs that do similar things specifically for 2-d and 1-d arrays.

You need to make sure and use PyArray_Free to clean up the pointers created to simulate the pointer-to-pointers approach.

Best regards,

-Travis O.

> -Rob
>
> ----
> Rob Hetland, Associate Professor
> Dept.
of Oceanography, Texas A&M University
> http://pong.tamu.edu/~rob
> phone: 979-458-0096, fax: 979-845-6331
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From ramercer at gmail.com Mon Jan 21 14:40:13 2008
From: ramercer at gmail.com (Adam Mercer)
Date: Mon, 21 Jan 2008 14:40:13 -0500
Subject: [SciPy-user] Strange fortran (g95) build error on Mac OS X - not finding fortran compiler
Message-ID: <799406d60801211140k2961b966jf93095039cf94468@mail.gmail.com>

Hi

I'm running into a strange problem trying to build scipy-0.6.0 using the g95-0.90 fortran compiler (from http://www.g95.org) on Mac OS X. The build fails with:

building 'scipy.interpolate.dfitpack' extension
error: extension 'scipy.interpolate.dfitpack' has Fortran sources but no Fortran compiler found

which is strange, as earlier in the build process the fortran compiler is found and fortran code is compiled without issue:

building 'specfun' library
compiling Fortran sources
Fortran f77 compiler: /opt/local/bin/g95 -Wall -ffixed-form -fno-second-underscore -fPIC -O2 -funroll-loops
Fortran f90 compiler: /opt/local/bin/g95 -Wall -fno-second-underscore -fPIC -O2 -funroll-loops
Fortran fix compiler: /opt/local/bin/g95 -Wall -ffixed-form -fno-second-underscore -Wall -fno-second-underscore -fPIC -O2 -funroll-loops
creating build/temp.macosx-10.3-i386-2.5/scipy/special/specfun
compile options: '-c'
g95:f77: scipy/special/specfun/specfun.f

I'm building scipy using the following command:

$ python setup.py config_fc --fcompiler gnu95 --f77exec /opt/local/bin/g95 --f90exec /opt/local/bin/g95 build

If I use the gfortran compiler from gcc42 instead, then the build proceeds successfully. Any idea why this problem is occurring with g95?

Cheers

Adam

From cohen at slac.stanford.edu Mon Jan 21 15:12:35 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Mon, 21 Jan 2008 12:12:35 -0800
Subject: [SciPy-user] Again on Sorting Objects
In-Reply-To:
References: <4794DD16.6060408@gmail.com>
Message-ID: <4794FCB3.8050708@slac.stanford.edu>

great little tools. I realized following this thread that it would be extremely handy for me to have a fast, easy way to do this groupby but on 2D arrays. In other words, starting with

array([[  1,  12],
       [  1,  24],
       [  2,   3],
       [  2,  60],
       [  2, 100],
       [  3,  75]])

I would love to end with:

array([[ 1, 12],
       [ 1, 24]])
array([[  2,   3],
       [  2,  60],
       [  2, 100]])
array([[ 3, 75]])

or something like that. Any way to do that quickly, based on the previously mentioned tools?

best,
Johann

Alan G Isaac wrote:
> So ``a`` is already sorted? Use groupby?
>
> Cheers,
> Alan Isaac
>
>>>> from itertools import groupby
>>>> for t,v in groupby(a): print list(v)
> ...
> [2]
> [3]
> [4, 4, 4, 4]
> [5]
> [6, 6]
> [8, 8, 8]
> [10, 10]
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From aisaac at american.edu Mon Jan 21 15:29:55 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 21 Jan 2008 15:29:55 -0500
Subject: [SciPy-user] Again on Sorting Objects
In-Reply-To: <4794FCB3.8050708@slac.stanford.edu>
References: <4794DD16.6060408@gmail.com> <4794FCB3.8050708@slac.stanford.edu>
Message-ID:

On Mon, 21 Jan 2008, Johann Cohen-Tanugi apparently wrote:
> great little tools. I realized following this thread that it would be extremely handy for me to have a fast, easy way to do this groupby but on 2D arrays.
Unfortunately you cannot quite do this, I think, but you can do:

for k,v in groupby(a, lambda x: x[0]): print N.array(list(v))

hth,
Alan Isaac

From mattknox_ca at hotmail.com Mon Jan 21 15:37:16 2008
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Mon, 21 Jan 2008 20:37:16 +0000 (UTC)
Subject: [SciPy-user] timeseries import error
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com>
Message-ID:

> @Pierre/Matt: Is there anything a non-C-programmer can help with to get the package ready for a scikit?

Porting the package to a scikit is going to be fairly trivial. It is really just a matter of waiting on the maskedarray numpy merging. Although since the maskedarray branch has been created, I suppose we can start this work now, with the disclaimer that the maskedarray branch is required for the timeseries module. The only changes we will have to make to the code are to change a few import statements, and to change the setup.py script to use setuptools and install the timeseries module as a scikit namespace package. I can take care of those items.

If you are looking to help in general... here is a rough list of some to-do items for the timeseries module which do not require any knowledge of C programming:

- verify unit tests work using the nose framework
- add plotting support for hourly, minutely, and secondly frequencies
- clean up reporting code and optimize for performance
- create appropriate *tolist* methods for TimeSeries and TimeSeriesRecords classes
- improve robustness of moving_funcs for multidimensional arrays
- add section to documentation about working with time series and databases

plus the ever present...

- add more unit tests
- improve documentation

If you want to tackle any of those items and need some additional clarity, drop me a line. Help is always appreciated!

- Matt

From cohen at slac.stanford.edu Mon Jan 21 20:06:15 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Mon, 21 Jan 2008 17:06:15 -0800
Subject: [SciPy-user] scipy.io.read_array and numpy.fromfile
Message-ID: <47954187.5060708@slac.stanford.edu>

hi there,
Am I correct to understand that scipy.io.read_array has been deprecated and that numpy.fromfile should now be used instead? In that case, it seems that fromfile does not have a 'comment' attribute, which is unfortunate. Can someone confirm or correct this?
thanks in advance,
Johann

From oliphant at enthought.com Mon Jan 21 21:31:39 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Mon, 21 Jan 2008 20:31:39 -0600
Subject: [SciPy-user] scipy.io.read_array and numpy.fromfile
In-Reply-To: <47954187.5060708@slac.stanford.edu>
References: <47954187.5060708@slac.stanford.edu>
Message-ID: <4795558B.4060406@enthought.com>

Johann Cohen-Tanugi wrote:
> hi there,
> Am I correct to understand that scipy.io.read_array has been deprecated and that numpy.fromfile should now be used instead? In that case, it seems that fromfile does not have a 'comment' attribute, which is unfortunate. Can someone confirm or correct this?

No, numpy.fromfile is "low-level". numpy.loadtxt will be the long-term replacement for read_array.

-Travis O.

From fperez.net at gmail.com Tue Jan 22 01:49:04 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 21 Jan 2008 23:49:04 -0700
Subject: [SciPy-user] Sage/Scipy Days 8 reminder: Feb 29-March 4.
Message-ID:

Hi all,

Just a quick reminder for all about the upcoming Sage/Scipy Days 8 at Enthought collaborative meeting:

http://wiki.sagemath.org/days8

Email me directly (Fernando.Perez at Colorado.edu) if you plan on coming, so we can have a proper count and plan accordingly.

Cheers,

f

From hetland at tamu.edu Tue Jan 22 02:17:51 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Tue, 22 Jan 2008 08:17:51 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <4794EFD0.3080902@enthought.com>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <4794EFD0.3080902@enthought.com>
Message-ID:

On Jan 21, 2008, at 8:17 PM, Travis E. Oliphant wrote:
> You might find PyArray_AsCArray useful.

Travis-

Indeed, that was what I needed to get it to work right. For posterity, here is what I did. I know I need a 2D array, so much of the complexity in PyArray_AsCArray could be taken out. I will also assume a contiguous array, so that my function looks something like this:

void* function_wrap(double *pts1, int nPts, int dim) {

    int i;
    double **pts;

    pts = (double **)malloc(nPts * sizeof(double *));
    for (i=0; i<nPts; i++) {
        pts[i] = pts1 + i*dim;
    }

    /// Call and return result from some function that needs **pts

}

Are there any pitfalls to this approach? It seems to work perfectly for the test cases I have done.

-Rob

From matthieu.brucher at gmail.com Tue Jan 22 03:00:40 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jan 2008 09:00:40 +0100
Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...)
In-Reply-To:
References:
Message-ID:

On a side note, is there an efficient way to compute the co-occurrence matrix of an array?

Matthieu

2008/1/18, Matthieu Brucher :
> Thanks for the tips, but ITK seems only to support the usual moments; there is no trace of the more complex invariants.
> And OpenCV seems to be 2D only (like PIL).
>
> Other ideas?
>
> Matthieu
>
> 2008/1/18, jelle feringa :
>> Sure, for simple straightforward image processing you can use PIL.
>> If you need to apply advanced image analysis, WrapITK is your friend.
>>
>> -jelle
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> --
> French PhD student
> Website : http://matthieu-brucher.developpez.com/
> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn : http://www.linkedin.com/in/matthieubrucher

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan at sun.ac.za Tue Jan 22 12:24:56 2008
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Tue, 22 Jan 2008 19:24:56 +0200
Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...)
In-Reply-To:
References:
Message-ID: <20080122172456.GC29954@mentat.za.net>

Hi Matthieu

On Tue, Jan 22, 2008 at 09:00:40AM +0100, Matthieu Brucher wrote:
> On a side note, is there an efficient way to compute the co-occurrence matrix of an array?

You can find a ctypes implementation at

http://mentat.za.net/source/greycomatrix.tar.bz2

This is a translated version of some C++ code I wrote a few years ago, so please check the results thoroughly (patches are welcome).

Another person you may want to chat to is Zachary Pincus:

http://public.kitware.com/pipermail/insight-users/2004-June/009190.html

who also hangs around on this list.

Regards
Stéfan

From zachary.pincus at yale.edu Tue Jan 22 12:52:51 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 22 Jan 2008 12:52:51 -0500
Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...)
In-Reply-To: <20080122172456.GC29954@mentat.za.net>
References: <20080122172456.GC29954@mentat.za.net>
Message-ID: <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu>

Hi all,

My grey-level co-occurrence matrix code is indeed in ITK, and probably python-accessible via WrapITK. I can give some pointers to this if desired, but that's a pretty "heavy-weight" solution.

I think that having versions of these basic image feature calculations in a scikit or something would be very valuable. In the past, I've implemented Zernike moments and co-occurrence matrix statistics in the context of ITK's C++ framework, and it really wasn't very much work. I would almost suggest that writing them from scratch as numpy operations might be easier than trying to deal with strange C-code. (And not slower -- I think that both can easily be cast in terms of ufuncs and whole-matrix operations, etc.)

I'm happy to talk at more length about the possibility of cobbling together such a scikit, if anyone's interested.

Zach

On Jan 22, 2008, at 12:24 PM, Stefan van der Walt wrote:
> Hi Matthieu
>
> On Tue, Jan 22, 2008 at 09:00:40AM +0100, Matthieu Brucher wrote:
>> On a side note, is there an efficient way to compute the co-occurrence matrix of an array?
>
> You can find a ctypes implementation at
>
> http://mentat.za.net/source/greycomatrix.tar.bz2
>
> This is a translated version of some C++ code I wrote a few years ago, so please check the results thoroughly (patches are welcome).
>
> Another person you may want to chat to is Zachary Pincus:
>
> http://public.kitware.com/pipermail/insight-users/2004-June/009190.html
>
> who also hangs around on this list.
>
> Regards
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From f.braennstroem at gmx.de Tue Jan 22 15:58:40 2008
From: f.braennstroem at gmx.de (Fabian Braennstroem)
Date: Tue, 22 Jan 2008 20:58:40 +0000
Subject: [SciPy-user] compare two csv files
Message-ID:

Hi,

I would like to compare two csv files; actually, two columns from two csv files. I would use something like:

import csv
import time
from numpy import array

def read_test():
    start = time.clock()
    reader = csv.reader(file('data.txt'))
    data = [map(float, row) for row in reader]
    data = array(data, dtype=float)
    return data

to get my data into an array. Does anyone have an idea how to compare the two columns?

Would be nice!

Fabian

From oliphant at enthought.com Tue Jan 22 15:04:02 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 22 Jan 2008 14:04:02 -0600
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To:
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <4794EFD0.3080902@enthought.com>
Message-ID: <47964C32.7030303@enthought.com>
I will also > assume a contiguous array, so that my function looks something like > this: > > > void* function_wrap(double *pts1, int nPts, int dim) { > > int i; > double **pts; > > pts = (double **)malloc(nPts * sizeof(double *)); > for (i=0; i pts[i] = pts1 + i*dim; > } > > /// Call and return result from some function that needs **pts > > } > > > > Are there any pitfalls to this approach? It seems to work perfectly > for the test cases I have done. > Some minor ones: 1) you have to remember to free the malloc'd pointers. 2) It is *not* the same as a 2-d static C array to the compiler which you would find if you tried to pass it in to a subroutine that was (unfortunately) declared to need a specific size of 2-d array. -Travis O. From oliphant at enthought.com Tue Jan 22 15:10:41 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 22 Jan 2008 14:10:41 -0600 Subject: [SciPy-user] SciPy/NumPy Doc-day on Friday Jan 25th Message-ID: <47964DC1.7020404@enthought.com> It's time for another doc-day for SciPy/NumPy. We will convene on IRC during the day on Friday. I will try and spend some time in the afternoon/evening on Friday night (Central Standard Time), but will be logged on to IRC during the rest of the day. Come join us at irc.freenode.net (channel scipy). We may update the list of priorities which is still located on the SciPy Trac Wiki: http://projects.scipy.org/scipy/scipy/wiki/DocDays I look forward to "irc-eeing" as many of you as can participate on Friday-Saturday (depending on your timezone). -Travis O. From matthieu.brucher at gmail.com Tue Jan 22 15:14:05 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 22 Jan 2008 21:14:05 +0100 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) In-Reply-To: <20080122172456.GC29954@mentat.za.net> References: <20080122172456.GC29954@mentat.za.net> Message-ID: Thank you, I'll try this tomorrow. I'd prefer using your code (if possible) than Zachary's as I don't have a compiled ITK on my box. Matthieu 2008/1/22, Stefan van der Walt : > > Hi Matthieu > > On Tue, Jan 22, 2008 at 09:00:40AM +0100, Matthieu Brucher wrote: > > On a side note, is there an efficient way to compute the cooccurence > matrix of > > an array ? > > You can find a ctypes implementation at > > http://mentat.za.net/source/greycomatrix.tar.bz2 > > This is a translated version of some C++ code I wrote a few years ago, > so please check the results thoroughly (patches are welcome). > > Another person you may want to chat to is Zachary Pincus: > > http://public.kitware.com/pipermail/insight-users/2004-June/009190.html > > who also hangs around on this list. > > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Jan 22 15:21:48 2008 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 22 Jan 2008 22:21:48 +0200 Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...) 
In-Reply-To: <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu>
References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu>
Message-ID: <20080122202148.GA17231@mentat.za.net>

Hi Zachary

On Tue, Jan 22, 2008 at 12:52:51PM -0500, Zachary Pincus wrote:
> wasn't very much work. I would almost suggest that writing them from scratch as numpy operations might be easier than trying to deal with strange C-code. (And not slower -- I think that both can easily be cast in terms of ufuncs and whole-matrix operations, etc.)

I'd be very interested in such a solution, and I think it may prove somewhat challenging (calculating the grey-level co-occurrence matrix efficiently, using only numpy operations).

> I'm happy to talk at more length about the possibility of cobbling together such a scikit, if anyone's interested.

I am all for the idea. Ndimage was written before numpy was on the scene, and now we can replace a lot of its functionality using Python code (that would execute just as fast!). I recall that, when the sandbox still existed, there was some colour-space conversion code that we could incorporate. I also have some computer vision, linear-filtering, non-linear operator and image restoration code, which complements the current ndimage functionality.

Looking forward to hearing more on the topic.

Regards
Stéfan

From matthieu.brucher at gmail.com Tue Jan 22 15:42:13 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jan 2008 21:42:13 +0100
Subject: [SciPy-user] 2D and 3D moments of an image (Hu, Zernike, ...)
In-Reply-To: <20080122202148.GA17231@mentat.za.net>
References: <20080122172456.GC29954@mentat.za.net> <4B197AE6-8FD0-4E32-87F5-43F7D322DD54@yale.edu> <20080122202148.GA17231@mentat.za.net>
Message-ID:

>> I'm happy to talk at more length about the possibility of cobbling together such a scikit, if anyone's interested.
>
> I am all for the idea. Ndimage was written before numpy was on the scene, and now we can replace a lot of its functionality using Python code (that would execute just as fast!).

It would be great, I'd like to see this; co-occurrence coefficients can be interesting in manifold learning :)

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lorenzo.isella at gmail.com Tue Jan 22 16:18:19 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 22 Jan 2008 22:18:19 +0100
Subject: [SciPy-user] Again on Sorting Objects
In-Reply-To:
References:
Message-ID: <47965D9B.2080603@gmail.com>

I am glad the thread was judged interesting, and I hope I did not ask questions whose answer had already been posted many times (but my online searches were not successful).
Another closely related question: you have the arrays

A=[1 12 2 3 12 7 9 9 7] and B=[5 8 99 12 1 3 7 4 2]

This is what I am after (A stands for some recorded events and I want to get rid of those which are too rare):
1) get rid of the elements of A which do not appear at least twice:

A_new=[12 12 7 7 9 9]

2) get rid of the corresponding elements of B, i.e.:

B_new=[8 1 3 2 7 4]

Are these also almost one-liners in Scipy/Numpy? I have to say I am impressed by how much one can achieve in a few lines. Should these cute little tools appear somewhere in a cookbook?
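A possible sketch using the sort/searchsorted tools shown earlier in the thread (an added illustration, not from the original posts; note it keeps A's elements in their original order rather than grouped by value):

import numpy as np

A = np.array([1, 12, 2, 3, 12, 7, 9, 9, 7])
B = np.array([5, 8, 99, 12, 1, 3, 7, 4, 2])

sA = np.sort(A)
u = np.unique1d(A)                     # unique values of A (np.unique in modern numpy)
counts = sA.searchsorted(u, side='right') - sA.searchsorted(u, side='left')
frequent = u[counts >= 2]              # values occurring at least twice

mask = (A[:, np.newaxis] == frequent).any(axis=1)
A_new = A[mask]                        # -> [12 12  7  9  9  7]
B_new = B[mask]                        # -> [ 8  1  3  7  4  2]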
Cheers

Lorenzo

From orest.kozyar at gmail.com Tue Jan 22 16:21:11 2008
From: orest.kozyar at gmail.com (Orest Kozyar)
Date: Tue, 22 Jan 2008 13:21:11 -0800 (PST)
Subject: [SciPy-user] OpenOpt - connecting ALGENCAN
Message-ID: <5fa79e79-5662-4901-a378-11897aba002a@c4g2000hsg.googlegroups.com>

I have been able to successfully compile ALGENCAN for Windows XP using Cygwin and create the python wrapper. The test Python program provided (algencanma.py) works well. However, I'm not sure what the next step to "connect" this to the OpenOpt framework is. What files do I need to copy where? Has anyone been able to do this successfully? I've looked around on both the ALGENCAN website and the OpenOpt website and can't find documentation anywhere on this.

Thanks!
Orest

From Consult at KandA-Ltd.com Tue Jan 22 17:18:00 2008
From: Consult at KandA-Ltd.com (Kenneth Kalan)
Date: Tue, 22 Jan 2008 16:18:00 -0600 (CST)
Subject: [SciPy-user] compile/installation problems
Message-ID: <2748.76.224.124.210.1201040280.squirrel@mail.kxs.net>

I'm trying to build a RHEL 5 x64 rpm of scipy. It keeps failing, and the problem seems to be related to umfpack. I've built ufsparse-2.1.1-1.fc6.src.rpm

build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: too few arguments to function '__builtin_object_size'
build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: expected expression before ')' token
build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: too few arguments to function '__builtin___memcpy_chk'
build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: expected expression before ')' token
build/src.linux-x86_64-2.4/scipy/linsolve/umfpack/_umfpack_wrap.c:6189: error: too few arguments to function '__memcpy_ichk'

I don't see any reference to a version of umfpack and would like to know if the version in ufsparse above is the correct one, or do I need to specifically build another one.

Thanks,
Ken

From yennifersantiago at gmail.com Tue Jan 22 22:15:37 2008
From: yennifersantiago at gmail.com (Yennifer Santiago)
Date: Tue, 22 Jan 2008 23:15:37 -0400
Subject: [SciPy-user] Install_SciPy
Message-ID: <41bc705b0801221915g6c4ef7f3kc07e84d1a51f7de4@mail.gmail.com>

Hello...

I want to install Scipy 0.6.0. I have already installed the packages numpy, atlas, lapack and SciPy, but when I run the test after installing SciPy:

carolina at carolinapc:~$ import scipy
carolina at carolinapc:~$ scipy.test(level=1)
bash: error de sintaxis cerca de token no esperado `level=1'

What is the problem??

Thanks

From millman at berkeley.edu Tue Jan 22 22:17:11 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Tue, 22 Jan 2008 19:17:11 -0800
Subject: [SciPy-user] Install_SciPy
In-Reply-To: <41bc705b0801221915g6c4ef7f3kc07e84d1a51f7de4@mail.gmail.com>
References: <41bc705b0801221915g6c4ef7f3kc07e84d1a51f7de4@mail.gmail.com>
Message-ID:

You need to launch python first; you appear to be working at a bash shell.

On Jan 22, 2008 7:15 PM, Yennifer Santiago wrote:
> Hello...
>
> I want to install Scipy 0.6.0. I have already installed the packages numpy, atlas, lapack and SciPy, but when I run the test after installing SciPy:
>
> carolina at carolinapc:~$ import scipy
> carolina at carolinapc:~$ scipy.test(level=1)
> bash: error de sintaxis cerca de token no esperado `level=1'
>
> What is the problem??
>
> Thanks
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From david.huard at gmail.com Tue Jan 22 22:53:23 2008
From: david.huard at gmail.com (David Huard)
Date: Tue, 22 Jan 2008 22:53:23 -0500
Subject: [SciPy-user] timeseries import error
In-Reply-To: <20080118232049.GE19790@mentat.za.net>
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <20080118232049.GE19790@mentat.za.net>
Message-ID: <91cf711d0801221953w718020bfo30822df89fa16deb@mail.gmail.com>

Hi Stefan,

everything looks fine, except for the __init__ file, where import add_newdocs should be called before import ma. This is because ma docstrings append elements to numpy docstrings that are defined in add_newdocs.

Thanks,

David

2008/1/18, Stefan van der Walt :
> Hi David
>
> On Fri, Jan 18, 2008 at 12:09:02PM -0500, David Huard wrote:
>> The current timeseries depends on scipy.sandbox.maskedarray, but as you have seen, it doesn't work so well.
>> timeseries can be easily modified to depend on the ma module in the numpy maskedarray branch, but since the branch dates from about two months ago, the latest changes to numpy haven't been merged and scipy is currently incompatible with numpy's maskedarray branch. As a consequence, timeseries (depending on scipy.io for instance) won't import.
>
> I have merged the latest numpy changes into the maskedarray branch. Please try it again, and report any problems.
>
> Thanks
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dmitrey.kroshko at scipy.org Wed Jan 23 02:26:30 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Wed, 23 Jan 2008 09:26:30 +0200
Subject: [SciPy-user] OpenOpt - connecting ALGENCAN
In-Reply-To: <5fa79e79-5662-4901-a378-11897aba002a@c4g2000hsg.googlegroups.com>
References: <5fa79e79-5662-4901-a378-11897aba002a@c4g2000hsg.googlegroups.com>
Message-ID: <4796EC26.6020902@scipy.org>

Ensure algencan.py and pywrapper.so are copied into directories from PYTHONPATH (or add the directory where you have built algencan with the Python binding to PYTHONPATH).

Regards,
D.

Orest Kozyar wrote:
> I have been able to successfully compile ALGENCAN for Windows XP using Cygwin and create the python wrapper. The test Python program provided (algencanma.py) works well. However, I'm not sure what the next step to "connect" this to the OpenOpt framework is. What files do I need to copy where? Has anyone been able to do this successfully? I've looked around on both the ALGENCAN website and the OpenOpt website and can't find documentation anywhere on this.
>
> Thanks!
From mani.sabri at gmail.com Wed Jan 23 05:01:01 2008
From: mani.sabri at gmail.com (mani sabri)
Date: Wed, 23 Jan 2008 13:31:01 +0330
Subject: [SciPy-user] clustering multivariate timeseries
Message-ID: <479710ad.06a0100a.32c6.ffff8e76@mx.google.com>

Hi,

Sorry for the irrelevant subject. I'm learning data mining with an
incremental-experiments approach, and I ended up clustering my sensors
(multivariate timeseries) with a 2d wavelet transform (pywt): compressing
the data to a single timeseries, using dynamic time warping as the
distance measure, and passing the distance matrix to a hierarchical
clustering algorithm (Orange HCA). But I ended up with a cluster of frames
in which sensor A is similar to sensor B, and sensor B is similar to
sensor C, and so on (far from ideal: sensor A being similar to sensor A,
and sensor B being similar to sensor B, and ...). I'm not surprised,
because AFAIK a 2d wavelet transform looks at the data as a 2d picture,
not as different timeseries.

How can I solve this problem? Do HMMs (hidden Markov models) solve this
problem? Is there any module available in the numpy context?

I really appreciate your suggestions.

Regards,
Mani

From stefan at sun.ac.za Wed Jan 23 07:28:17 2008
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Wed, 23 Jan 2008 14:28:17 +0200
Subject: [SciPy-user] timeseries import error
In-Reply-To: <91cf711d0801221953w718020bfo30822df89fa16deb@mail.gmail.com>
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <20080118232049.GE19790@mentat.za.net> <91cf711d0801221953w718020bfo30822df89fa16deb@mail.gmail.com>
Message-ID: <20080123122817.GH25779@mentat.za.net>

On Tue, Jan 22, 2008 at 10:53:23PM -0500, David Huard wrote:
> everything looks fine, except for the __init__ file, where import add_newdocs
> should be called before import ma. This is because ma docstrings append
> elements to numpy docstrings that are defined in add_newdocs.

Please try the current version, it should be fixed now.

Thanks
Stéfan

From hetland at tamu.edu Wed Jan 23 07:59:13 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Wed, 23 Jan 2008 13:59:13 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <47964C32.7030303@enthought.com>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <4794EFD0.3080902@enthought.com> <47964C32.7030303@enthought.com>
Message-ID: <2E8178C9-05EE-4BCC-B25F-4A597116772B@tamu.edu>

On Jan 22, 2008, at 9:04 PM, Travis E. Oliphant wrote:
>>
>> Are there any pitfalls to this approach? It seems to work perfectly
>> for the test cases I have done.
>>
> Some minor ones:
>
> 1) you have to remember to free the malloc'd pointers.

This seems to be a problem with my code (below). Or, more likely, my
lack of skill with C.

I checked it out, and indeed there is a memory leak that I suspect is
related to the pointer array not being freed. When I try to free the
**double array, it seems to also remove the original numpy array that
was passed with *pts1. I tried de-referencing the pointer first, but
that also fails.

What is the best way to release the temporary **double pointers
without releasing the associated memory?

-Rob

extern "C" void* init_kd_tree(double *pts1, int nPts, int dim) {

    int i;
    double **pts;
    ANNkd_tree *kdTree;

    /// Convert (*double) input array into a **double
    pts = (double **)malloc(nPts * sizeof(double *));
    for (i=0; i<nPts; i++) {
        pts[i] = pts1 + i*dim;
    }

    /// Initialize and return new kd_tree object.
    kdTree = new ANNkd_tree(pts, nPts, dim);

    for (i=0; i<nPts; i++) {
        *pts[i] = NULL;
    }

    free(pts);  /// this seems to also delete the input data (pts1) in python..

    return kdTree;
}

/// below other methods that deal with the object.
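For readers following this thread: the numpy side of such a wrapper is
typically wired up with ctypes along these lines (the shared library
name and the restype handling are illustrative assumptions, not Rob's
actual code):

import numpy as np
import ctypes

lib = ctypes.CDLL('./libkdtree.so')         # hypothetical library name
lib.init_kd_tree.restype = ctypes.c_void_p  # opaque pointer to the ANNkd_tree
lib.init_kd_tree.argtypes = [
    np.ctypeslib.ndpointer(dtype=np.float64, flags='C_CONTIGUOUS'),
    ctypes.c_int,
    ctypes.c_int,
]

pts = np.ascontiguousarray(np.random.rand(100, 3))
tree = lib.init_kd_tree(pts, pts.shape[0], pts.shape[1])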
From: Ryan May
Date: Wed, 23 Jan 2008
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <2E8178C9-05EE-4BCC-B25F-4A597116772B@tamu.edu>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <4794EFD0.3080902@enthought.com> <47964C32.7030303@enthought.com> <2E8178C9-05EE-4BCC-B25F-4A597116772B@tamu.edu>
Message-ID: <479752E7.9070801@ou.edu>

Rob Hetland wrote:
> On Jan 22, 2008, at 9:04 PM, Travis E. Oliphant wrote:
>>> Are there any pitfalls to this approach? It seems to work perfectly
>>> for the test cases I have done.
>>>
>> Some minor ones:
>>
>> 1) you have to remember to free the malloc'd pointers.
>
> This seems to be a problem with my code (below). Or, more likely, my
> lack of skill with C.
>
> I checked it out, and indeed there is a memory leak that I suspect
> is related to the pointer array not being freed. When I try to free
> the **double array, it seems to also remove the original numpy array
> that was passed with *pts1. I tried de-referencing the pointer
> first, but that also fails.
>
> What is the best way to release the temporary **double pointers
> without releasing the associated memory?
>
> -Rob
>
> extern "C" void* init_kd_tree(double *pts1, int nPts, int dim) {
>
>     int i;
>     double **pts;
>     ANNkd_tree *kdTree;
>
>     /// Convert (*double) input array into a **double
>     pts = (double **)malloc(nPts * sizeof(double *));
>     for (i=0; i<nPts; i++) {
>         pts[i] = pts1 + i*dim;
>     }
>
>     /// Initialize and return new kd_tree object.
>     kdTree = new ANNkd_tree(pts, nPts, dim);
>
>     for (i=0; i<nPts; i++) {
>         *pts[i] = NULL;
>     }

I may be grokking this incorrectly, but I think your problem lies here.
I'm not sure what you're trying to accomplish with this loop (since
setting to NULL isn't necessary before free-ing). What it actually does
is set the value at the memory address located at pts[i] to NULL (0).
This will end up modifying the values in pts1 itself. (Since pts[i] =
pts1 + i*dim, then *pts[i] is the same as pts1[i*dim].)

>
>     free(pts);  /// this seems to also delete the input data (pts1)
> in python..

I'm not sure this is really the case. It looks like a perfect match for
the malloc above.

>
>     return kdTree;
> }
>
> /// below other methods that deal with the object.
>

If that loop isn't causing the problem, then the problem likely lies
elsewhere.

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From david.huard at gmail.com Wed Jan 23 10:23:15 2008
From: david.huard at gmail.com (David Huard)
Date: Wed, 23 Jan 2008 10:23:15 -0500
Subject: [SciPy-user] timeseries import error
In-Reply-To: <20080123122817.GH25779@mentat.za.net>
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <20080118232049.GE19790@mentat.za.net> <91cf711d0801221953w718020bfo30822df89fa16deb@mail.gmail.com> <20080123122817.GH25779@mentat.za.net>
Message-ID: <91cf711d0801230723g139dd5ecp5c3846e74411db65@mail.gmail.com>

It is. Thanks.

David

2008/1/23, Stefan van der Walt :
>
> On Tue, Jan 22, 2008 at 10:53:23PM -0500, David Huard wrote:
> > everything looks fine, except for the __init__ file, where import add_newdocs
> > should be called before import ma. This is because ma docstrings append
> > elements to numpy docstrings that are defined in add_newdocs.
>
> Please try the current version, it should be fixed now.
>
> Thanks
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
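In schematic form, the import-ordering constraint David describes (a
sketch of the idea, not the actual numpy __init__ file):

# numpy/__init__.py, schematically: add_newdocs must be imported first,
# because ma appends to docstrings that add_newdocs defines.
import add_newdocs
import ma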
From david at ar.media.kyoto-u.ac.jp Wed Jan 23 10:34:56 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 24 Jan 2008 00:34:56 +0900
Subject: [SciPy-user] [ANN] numscons 0.3.0 release
Message-ID: <47975EA0.1050504@ar.media.kyoto-u.ac.jp>

Hi,

I've just released version 0.3.0 of numscons, an alternative build
system for numpy. The tarballs are available on launchpad.

https://launchpad.net/numpy.scons.support/0.3/0.3.0

To use it, you need to get the build_with_scons numpy branch: see
http://projects.scipy.org/scipy/numpy/wiki/NumpyScons for more details.
This release is an important milestone:

- all regressions because of the split from numpy are fixed.
- it can build numpy on linux (gcc/intel), mac os X (gcc), windows
(mingw) and solaris (Sun compilers).
- mkl, sunperf, accelerate/vecLib frameworks and ATLAS should work on
the platforms where it makes sense.
- a lot of internal changes: some basic unittests, a total revamp of
the code to check for performance libraries.
- almost all changes necessary to scons code are now included upstream,
or pending review.

If you test it and have problems building numpy, please submit a bug to
launchpad:

https://bugs.launchpad.net/numpy.scons.support/0.3/

Thanks,

David

From timmichelsen at gmx-topmail.de Wed Jan 23 15:06:20 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Wed, 23 Jan 2008 21:06:20 +0100
Subject: [SciPy-user] timeseries import error
In-Reply-To:
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com>
Message-ID: <47979E3C.9050805@gmx-topmail.de>

Hello Matt and others interested in timeseries!

I really would be delighted to see timeseries in a scikit and
stabilized. Windows builds would be a great cross-platform plus. But,
OK, let's be patient ;-)

My Scipy / Python learning curve is still high. Therefore, my
contribution may be a basic one.

> plus the ever present...
> - add more unit tests
This later, when I know more about this...

> - improve documentation
I can help out here. Do you have any guidelines/ideas?
I can imagine: Timeseries/FAQ & Timeseries/Recipes
They would be based upon my own short tests of the package, enough to
get one started.

Kind regards,
Timmie

From yennifersantiago at gmail.com Wed Jan 23 17:39:43 2008
From: yennifersantiago at gmail.com (Yennifer Santiago)
Date: Wed, 23 Jan 2008 18:39:43 -0400
Subject: [SciPy-user] Install_SciPy
Message-ID: <41bc705b0801231439h284d30a8g6d1936ad1fd1d592@mail.gmail.com>

> Hello...
>
> I want to install Scipy 0.6.0. I already installed the packages python, numpy,
> atlas, lapack and SciPy, but when I run the test after installing SciPy:
>
> carolina at carolinapc:~$ import scipy
> carolina at carolinapc:~$ scipy.test(level=1)
> bash: error de sintaxis cerca de token no esperado `level=1'
>
> What is the problem?
>
> Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fperez.net at gmail.com Wed Jan 23 17:45:40 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 23 Jan 2008 15:45:40 -0700
Subject: [SciPy-user] Install_SciPy
In-Reply-To: <41bc705b0801231439h284d30a8g6d1936ad1fd1d592@mail.gmail.com>
References: <41bc705b0801231439h284d30a8g6d1936ad1fd1d592@mail.gmail.com>
Message-ID:

On Jan 23, 2008 3:39 PM, Yennifer Santiago wrote:
>
> > Hello...
> >
> > I want to install Scipy 0.6.0. I already installed the packages python,
> > numpy, atlas, lapack and SciPy, but when I run the test after installing SciPy:
> >
> > carolina at carolinapc:~$ import scipy
> > carolina at carolinapc:~$ scipy.test(level=1)
> > bash: error de sintaxis cerca de token no esperado `level=1'
> >
> > What is the problem?

Jarrod provided you with a reply yesterday. Did that not help? If not,
please provide further details. He specifically indicated that you were
running in the system shell, while you need to execute the above commands
at a python prompt:

planck[cv]> python
Python 2.5.1 (r251:54863, May 2 2007, 16:56:35)
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy

...etc...

Please keep your responses on the original thread, so that someone can
actually help you, rather than restating the same question over and over.

Regards,

f

From hetland at tamu.edu Thu Jan 24 02:39:24 2008
From: hetland at tamu.edu (Rob Hetland)
Date: Thu, 24 Jan 2008 08:39:24 +0100
Subject: [SciPy-user] numpy array in ctype struct
In-Reply-To: <479752E7.9070801@ou.edu>
References: <20080115070726.A460060@jazz.ncnr.nist.gov> <478CCA3A.5060505@enthought.com> <20080115130429.F462016@jazz.ncnr.nist.gov> <3F8F685B-DFA0-4771-AEE9-A34C051B940E@tamu.edu> <4794EFD0.3080902@enthought.com> <47964C32.7030303@enthought.com> <2E8178C9-05EE-4BCC-B25F-4A597116772B@tamu.edu> <479752E7.9070801@ou.edu>
Message-ID:

On Jan 23, 2008, at 3:44 PM, Ryan May wrote:

> I may be grokking this incorrectly, but I think your problem lies here.
> I'm not sure what you're trying to accomplish with this loop (since
> setting to NULL isn't necessary before free-ing). What it actually does
> is set the value at the memory address located at pts[i] to NULL (0).
> This will end up modifying the values in pts1 itself. (Since pts[i] =
> pts1 + i*dim, then *pts[i] is the same as pts1[i*dim].)

This part wasn't intended to be the 'real' code; rather, I was just
trying different things out. I guess I didn't explain this well enough
in my last message. Resetting the pointers like this does not seem to
change the numpy array passed to the *double pointer, whereas freeing
the new **double does erase those values.

>>
>> free(pts);  /// this seems to also delete the input data (pts1)
>> in python..
>
> I'm not sure this is really the case. It looks like a perfect match for
> the malloc above.

Yes, that's what I thought, but it doesn't seem to work like I thought
it would.

> If that loop isn't causing the problem, then the problem likely lies
> elsewhere.

Yes, after your comments, it seems that is the only thing left. I was
just checking to see if someone had run across this before. I had
tested for other possible problems, but I guess I will have to keep
looking.

-Rob

----
Rob Hetland, Associate Professor
Dept.
of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331

From dwarnold45 at suddenlink.net Thu Jan 24 02:51:36 2008
From: dwarnold45 at suddenlink.net (David Arnold)
Date: Wed, 23 Jan 2008 23:51:36 -0800
Subject: [SciPy-user] Differential equations
Message-ID: <86E3DE9D-53EE-446D-8ECC-1194748B3CAD@suddenlink.net>

All,

If I have matplotlib, numpy, and scipy installed, is there a good
tutorial somewhere for plotting numerical solutions of differential
equations?

David.

From david.huard at gmail.com Thu Jan 24 09:50:54 2008
From: david.huard at gmail.com (David Huard)
Date: Thu, 24 Jan 2008 09:50:54 -0500
Subject: [SciPy-user] timeseries import error
In-Reply-To: <47979E3C.9050805@gmx-topmail.de>
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <47979E3C.9050805@gmx-topmail.de>
Message-ID: <91cf711d0801240650o25521059lb8d998033a9c8f5d@mail.gmail.com>

Hi Tim,

timeseries has been in the scikits for a couple of days now, thanks to
Matt. This is the svn url:

http://svn.scipy.org/svn/scikits/trunk

Cheers,

David

2008/1/23, Tim Michelsen :
>
> Hello Matt and others interested in timeseries!
>
> I really would be delighted to see timeseries in a scikit and
> stabilized. Windows builds would be a great cross-platform plus. But,
> OK, let's be patient ;-)
>
> My Scipy / Python learning curve is still high. Therefore, my
> contribution may be a basic one.
>
> > plus the ever present...
> > - add more unit tests
> This later, when I know more about this...
>
> > - improve documentation
> I can help out here. Do you have any guidelines/ideas?
> I can imagine: Timeseries/FAQ & Timeseries/Recipes
> They would be based upon my own short tests of the package, enough to
> get one started.
>
> Kind regards,
> Timmie
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From denisbz at t-online.de Thu Jan 24 12:25:11 2008
From: denisbz at t-online.de (denis bzowy)
Date: Thu, 24 Jan 2008 17:25:11 +0000 (UTC)
Subject: [SciPy-user] scipy with mac 10.4 vecLib, undefined dso_handle?
References: <47949F9D.8060200@t-online.de>
Message-ID:

More on building scipy from source on Mac OS X 10.4 ppc: do NOT say

[build_ext]
link_objects = /System/Library/Frameworks/Accelerate.framework\
/Versions/A/Accelerate

in site.cfg or ~/.pydistutils.cfg, because the build will crash deep
down, like this:

Traceback (most recent call last):
...
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/\
distutils/unixccompiler.py", line 222, in link
lib_opts + ['-o', output_filename])
TypeError: can only concatenate list (not "str") to list

Without this link_objects, numpy and scipy find the
.../Accelerate.framework by themselves and build ok (macosx 10.4.11 +
Xcode 2.4.1 fixed the undefined dso_handle), BUT it shouldn't crash ...

Can anyone point me to a nice long sample site.cfg for Macs please?
And a WIBNI (wouldn't it be nice if): setup.py could write a
site-merged.cfg that echoes all its input files and options?

cheers
-- denis
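On David Arnold's question above, a self-contained miniature of solving
and plotting an ODE (the decay equation is a stand-in; the tutorial LB
links to below walks through a fuller Lotka-Volterra example):

import numpy as np
from scipy.integrate import odeint
import pylab

def rhs(y, t):
    # dy/dt = -0.5*y, a stand-in right-hand side
    return -0.5 * y

t = np.linspace(0.0, 10.0, 101)
y = odeint(rhs, 1.0, t)  # initial condition y(0) = 1

pylab.plot(t, y)
pylab.xlabel('t')
pylab.ylabel('y(t)')
pylab.show()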
From berthe.loic at gmail.com Thu Jan 24 13:05:03 2008
From: berthe.loic at gmail.com (LB)
Date: Thu, 24 Jan 2008 10:05:03 -0800 (PST)
Subject: [SciPy-user] Differential equations
In-Reply-To: <86E3DE9D-53EE-446D-8ECC-1194748B3CAD@suddenlink.net>
References: <86E3DE9D-53EE-446D-8ECC-1194748B3CAD@suddenlink.net>
Message-ID:

You can have a look at http://www.scipy.org/LoktaVolterraTutorial

If it doesn't fit your needs, can you be a bit more precise about what
you need to plot?

Cheers,
-- LB

From ndbecker2 at gmail.com Thu Jan 24 14:00:35 2008
From: ndbecker2 at gmail.com (Neal Becker)
Date: Thu, 24 Jan 2008 14:00:35 -0500
Subject: [SciPy-user] openopt - any examples work?
Message-ID:

I just grabbed openopt svn (linux-fedora-f8) and did python setup.py
install.

I randomly tried 2 examples:

python example.py
starting solver ralg (license: BSD) with problem unnamed
Traceback (most recent call last):
File "example.py", line 66, in
exampleNSP()
File "example.py", line 49, in exampleNSP
r = p.solve('ralg') # ralg is name of a solver
File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", line 211, in solve
return runProbSolver(self, solvers, *args)
File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 146, in runProbSolver
solver(p)
File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/UkrOpt/ralg.py", line 62, in __solver__
g, fname, ind = self.getRalgDirection(x, p)
File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/UkrOpt/ralg.py", line 202, in getRalgDirection
maxRes, fname, ind = p.getMaxResidual(x, retAll=True)
TypeError: getMaxResidual() got an unexpected keyword argument 'retAll'

python nlp_1.py
Traceback (most recent call last):
File "nlp_1.py", line 108, in
r = p.solve('ALGENCAN')
File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", line 211, in solve
return runProbSolver(self, solvers, *args)
File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 23, in runProbSolver
solverClass = getattr(__import__(solver_str), solver_str)
File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/BrasilOpt/ALGENCAN.py", line 4, in
import algencan
ImportError: No module named algencan

Not a good start.

From dmitrey.kroshko at scipy.org Thu Jan 24 14:32:24 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 24 Jan 2008 21:32:24 +0200
Subject: [SciPy-user] openopt - any examples work?
In-Reply-To:
References:
Message-ID: <4798E7C8.4020504@scipy.org>

Example 2 requires ALGENCAN to be installed.

Example 1: I forgot to commit the latest changes to svn; please update,
or use the stable version, OpenOpt 0.15, from the OO install page.

Thank you for the bug report.

Regards, D.

Neal Becker wrote:
> I just grabbed openopt svn (linux-fedora-f8) and did python setup.py
> install.
> I randomly tried 2 examples:
>
> [the example.py and nlp_1.py tracebacks are snipped; see Neal's
> message above]
>
> Not a good start.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
>

From timmichelsen at gmx-topmail.de Thu Jan 24 15:21:14 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Thu, 24 Jan 2008 21:21:14 +0100
Subject: [SciPy-user] timeseries import error
In-Reply-To: <91cf711d0801240650o25521059lb8d998033a9c8f5d@mail.gmail.com>
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <47979E3C.9050805@gmx-topmail.de> <91cf711d0801240650o25521059lb8d998033a9c8f5d@mail.gmail.com>
Message-ID:

Hi!

> timeseries has been in the scikits for a couple of days now, thanks to
> Matt. This is the svn url:
>
> http://svn.scipy.org/svn/scikits/trunk

Does that mean that they will be automatically included in the scipy
binary builds in the near future?

I would like to have binaries for Windows and Ubuntu.

Kind regards,
Timmie

From f.braennstroem at gmx.de Thu Jan 24 16:45:45 2008
From: f.braennstroem at gmx.de (Fabian Braennstroem)
Date: Thu, 24 Jan 2008 21:45:45 +0000
Subject: [SciPy-user] compare two csv files
In-Reply-To:
References:
Message-ID:

Hi,

Me again ... Stupid question, or does no one have an idea?

Fabian

Fabian Braennstroem wrote on 01/22/2008 08:58 PM:
> Hi,
> I would like to compare two csv files; actually, two columns
> from two csv files.
> I would use something like:
>
> def read_test():
>     start = time.clock()
>     reader = csv.reader( file('data.txt') )
>     data = [ map(float, row) for row in reader ]
>     data = array(data, dtype = float)
>
> To get my data into an array.
>
> Does anyone have an idea how to compare the two columns?
> Would be nice!
> Fabian
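One concrete way to load and compare the two columns with numpy alone
(the file names and the delimiter are assumptions; numpy.loadtxt
replaces the csv-module loop):

import numpy as np

data1 = np.loadtxt('data1.csv', delimiter=',')  # hypothetical file names
data2 = np.loadtxt('data2.csv', delimiter=',')

col1 = data1[:, 0]
col2 = data2[:, 0]

print np.allclose(col1, col2)      # do the columns agree numerically?
print np.nonzero(col1 != col2)[0]  # indices where they differ exactly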
From aisaac at american.edu Thu Jan 24 15:49:06 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 24 Jan 2008 15:49:06 -0500
Subject: [SciPy-user] compare two csv files
In-Reply-To:
References:
Message-ID:

On Thu, 24 Jan 2008, Fabian Braennstroem apparently wrote:
> I would like to compare two csv files; actually, two columns
> from two csv files.

I do not understand your question. I take it that you put the data into
two arrays, and then you want to "compare" two columns, one from each
array?

>>> import numpy as N
>>> x = N.random.rand(5,3)
>>> y = N.random.rand(5,3)
>>> x[:,1]>y[:,2]
array([False, True, True, False, True], dtype=bool)
>>>

Cheers,
Alan Isaac

From gael.varoquaux at normalesup.org Thu Jan 24 15:49:37 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 24 Jan 2008 21:49:37 +0100
Subject: [SciPy-user] compare two csv files
In-Reply-To:
References:
Message-ID: <20080124204937.GC24328@phare.normalesup.org>

On Thu, Jan 24, 2008 at 09:45:45PM +0000, Fabian Braennstroem wrote:
> Hi,
> Me again ...
> Stupid question, or does no one have an idea?

d = data1[:, 0] - data2[:, 0]
# Indices where the data differ:
arange(len(d))[d != 0]

Of course this works only if you have numerical data. If you want
something more specific, please give more info on what you are trying
to do.

Gaël

> Fabian Braennstroem wrote on 01/22/2008 08:58 PM:
> > Hi,
> > I would like to compare two csv files; actually, two columns
> > from two csv files.
> > I would use something like:
> > def read_test():
> >     start = time.clock()
> >     reader = csv.reader( file('data.txt') )
> >     data = [ map(float, row) for row in reader ]
> >     data = array(data, dtype = float)
>
> > To get my data into an array.
>
> > Does anyone have an idea how to compare the two columns?

From strawman at astraw.com Thu Jan 24 17:18:43 2008
From: strawman at astraw.com (Andrew Straw)
Date: Thu, 24 Jan 2008 14:18:43 -0800
Subject: [SciPy-user] compare two csv files
In-Reply-To:
References:
Message-ID: <47990EC3.1070304@astraw.com>

Hi Fabian, this is not a direct answer to your question, but you may
also be interested in matplotlib's mlab.csv2rec(), which automatically
creates a record array from a csv file. John Hunter, and I to a much
lesser degree, have been hacking on this to make it work for us. Please
feel free to check its suitability for your purposes.

Fabian Braennstroem wrote:
> Hi,
> I would like to compare two csv files; actually, two columns
> from two csv files.
> I would use something like:
> def read_test():
>     start = time.clock()
>     reader = csv.reader( file('data.txt') )
>     data = [ map(float, row) for row in reader ]
>     data = array(data, dtype = float)
>
> To get my data into an array.
>
> Does anyone have an idea how to compare the two columns?
> Would be nice!
> Fabian
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From mattknox_ca at hotmail.com Thu Jan 24 18:32:43 2008
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Thu, 24 Jan 2008 23:32:43 +0000 (UTC)
Subject: [SciPy-user] timeseries import error
References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <47979E3C.9050805@gmx-topmail.de> <91cf711d0801240650o25521059lb8d998033a9c8f5d@mail.gmail.com>
Message-ID:

> > timeseries has been in the scikits for a couple of days now, thanks to Matt.
This > > is the svn url: > > > > http://svn.scipy.org/svn/scikits/trunk > Does that mean that they will be automatically included in the scipy > binary builds in the near future? > > I would like to have binaries for Windows and Ubuntu. I won't be providing any binaries until there has been an official release of numpy using the new version of the masked array module. I can't provide a timeline for that because I am not really involved in that process. But keep your eyes peeled for discussion on a numpy 1.0.5 release since that it is when it is likely to happen from my understanding. My intention is to provide windows binaries for Python 2.4 and 2.5 eventually. Python 2.3 won't be supported because the timeseries module relies on features of the datetime C api that were introduced in Python 2.4 (although if someone felt that they could remove that dependency without making the code significantly more complicated, I'd certainly be open to it). As far as ubuntu binaries... I'm a die hard windows guy so I won't be of any help here. It should be extremely simple to build on ubuntu though since linux comes setup with compilers and there are no external dependencies for the timeseries module outside of numpy and scipy itself (and scipy is only a run time requirement, not needed for building it). - Matt From ndbecker2 at gmail.com Thu Jan 24 19:11:01 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 24 Jan 2008 19:11:01 -0500 Subject: [SciPy-user] openopt - any examples work? References: <4798E7C8.4020504@scipy.org> Message-ID: Thanks for your help! Very impressive. I don't know much about this subject matter. Can anyone suggest where I can get some introduction (preferably online), explaining the background: what types of problems are suitable to which of these solvers? From nwagner at iam.uni-stuttgart.de Fri Jan 25 01:06:25 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 25 Jan 2008 07:06:25 +0100 Subject: [SciPy-user] openopt - any examples work? In-Reply-To: References: <4798E7C8.4020504@scipy.org> Message-ID: On Thu, 24 Jan 2008 19:11:01 -0500 Neal Becker wrote: > Thanks for your help! Very impressive. > > I don't know much about this subject matter. Can anyone >suggest where I can > get some introduction (preferably online), explaining >the background: what > types of problems are suitable to which of these >solvers? > http://plato.asu.edu/guide.html Nils From dmitrey.kroshko at scipy.org Fri Jan 25 03:02:48 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 25 Jan 2008 10:02:48 +0200 Subject: [SciPy-user] openopt - any examples work? In-Reply-To: References: <4798E7C8.4020504@scipy.org> Message-ID: <479997A8.2050103@scipy.org> I don't know what do you mean exactly. Along with link provided by Nils you could just take a look at http://scipy.org/scipy/scikits/wiki/OOClasses, classify your problem(s) mathematically and then choose appropriate solver(s) Regards, D. Neal Becker wrote: > Thanks for your help! Very impressive. > > I don't know much about this subject matter. Can anyone suggest where I can > get some introduction (preferably online), explaining the background: what > types of problems are suitable to which of these solvers? 
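To make dmitrey's suggestion concrete: once a problem is classified
(say, as a smooth NLP with box bounds), the OpenOpt calling pattern
looks roughly like this (the objective and sizes are placeholders):

from numpy import zeros, ones
from scikits.openopt import NLP

def f(x):
    return ((x - 1.0) ** 2).sum()  # placeholder objective

n = 10
p = NLP(f, zeros(n), lb=zeros(n), ub=10 * ones(n))  # box-bounded NLP
r = p.solve('ralg')  # ralg is the name of a solver
print r.xf, r.ff     # solution and objective value in OpenOpt's result object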
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From david at ar.media.kyoto-u.ac.jp Fri Jan 25 04:41:14 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Jan 2008 18:41:14 +0900 Subject: [SciPy-user] numscons, available as python egg Message-ID: <4799AEBA.3080103@ar.media.kyoto-u.ac.jp> Hi, Sorry for the flooding, but I finally managed to build an egg and put it on the web, so numscons is available as an egg, now. You should be able to install it using easy_install, e.g easy_install numscons should work. cheers, David From emanuele at relativita.com Fri Jan 25 05:33:40 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Fri, 25 Jan 2008 11:33:40 +0100 Subject: [SciPy-user] openopt: which NLP solver? Message-ID: <4799BB04.4010600@relativita.com> Hi, I've just installed openopt and need to minimize a function of many variables (100 to 10000, depending on the configuration) I need some help to find the correct solver among the many available in this very interesting openopt package. My case is a non-linear problem with simple constraints (all variables >0). The function is smooth according to what I know and I have worked out the analytical gradient. I already implemented everything (f and fprime) in python and tested using standard scipy.optimize.fmin_cg solver [0]. It works somewhat but: - it is not stable, i.e. there are sudden jumps sometimes after many many iterations in which it seems to converge (and in those cases the final solution is usually worse than before the jump) - it has no memory: the value returned by fmin_cg is the one of the last step and not the minimum value of all attempts made (and the different is quite relevant in my case) - it is possible that the evaluation of my function and gradient suffers some numerical instabilities - evaluation of the function takes seconds and evaluation of the gradient takes many seconds (even minutes) so I cannot wait for a huge number of iterations that fmin_cg seems to require - starting from different intial points I got different (local?) minima quite each time when the number of variables increases. Could it be that fmin_cg becomes unstable on large problems? If someone (dmitrey?) could help selecting most appropriate solver in openopt it would be much appreciated. In the meawhile I'll try 'ralg'. Thanks in advance, Emanuele P.S.: I'm having some troubles building openopt in ubuntu gutsy. "sudo python setup.py install" works but "python setup.py build" does not, requiring a previously installed "scikits" package. How can I install openopt in a custom path instead of /usr/local/lib/... ? [0]: by the way fmin_cg does not handle constraints, at least in standard scipy, which forced me to use abs() in many places. This could be a source of instabilities when computing gradient. From ndbecker2 at gmail.com Fri Jan 25 06:20:20 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 25 Jan 2008 06:20:20 -0500 Subject: [SciPy-user] [ANN] numscons 0.3.0 release References: <47975EA0.1050504@ar.media.kyoto-u.ac.jp> Message-ID: Is numscons specific to numpy/scipy, or is it for building arbitrary python extensions (replacing distutils?). I'm hoping for the latter. 
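On Emanuele's footnote above about forcing positivity through abs(): a
standard smooth alternative with an unconstrained solver is a change of
variables, sketched here on a toy objective:

import numpy as np
from scipy.optimize import fmin_cg

def f(x):
    return np.sum((x - 2.0) ** 2)  # toy objective; minimum at x = 2 > 0

def F(u):
    return f(np.exp(u))  # substitute x = exp(u), so x > 0 automatically

u0 = np.zeros(3)
u_opt = fmin_cg(F, u0)
print np.exp(u_opt)  # back-transform to the positive variables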
From david at ar.media.kyoto-u.ac.jp Fri Jan 25 06:30:34 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Jan 2008 20:30:34 +0900 Subject: [SciPy-user] [ANN] numscons 0.3.0 release In-Reply-To: References: <47975EA0.1050504@ar.media.kyoto-u.ac.jp> Message-ID: <4799C85A.80709@ar.media.kyoto-u.ac.jp> Neal Becker wrote: > Is numscons specific to numpy/scipy, or is it for building arbitrary python > extensions (replacing distutils?). I'm hoping for the latter. Depends on what you mean by numpy/scipy distutils: it can certainly build any python extensions. It does not replace distutils (numscons uses distutils for packaging, compiling py to pyc, pyo, etc...). In theory, it would be possible to use scons for everything, but this would be a huge task I will certainly not tackle in the foreseeable future. cheers, David From dmitrey.kroshko at scipy.org Fri Jan 25 06:47:06 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 25 Jan 2008 13:47:06 +0200 Subject: [SciPy-user] openopt: which NLP solver? In-Reply-To: <4799BB04.4010600@relativita.com> References: <4799BB04.4010600@relativita.com> Message-ID: <4799CC3A.6070802@scipy.org> Emanuele Olivetti wrote: > Hi, > > I've just installed openopt and need to minimize a function > of many variables (100 to 10000, depending on the configuration) > I need some help to find the correct solver among the many > available in this very interesting openopt package. > > > - it has no memory: the value returned by fmin_cg is the one of the last > step and not the minimum value of all attempts made (and the different > is quite relevant in my case) > as well as many other solvers, including even ALGENCAN. Maybe I 'll provide native OO handling of the situation but, on the other hand, usually it means something incorrect with your own funcs, for example it has lots of local minima or incorrect gradients. > - it is possible that the evaluation of my function and gradient suffers > some numerical instabilities > you should investigate is it so or not. Using p.check.df=1 could be very helpful (see openopt doc page, "auto check derivatives" chapter). If your 1st derivatives really have instabilities (noise, non-smooth - for example, using abs(...)) - then only ralg can handle the problem. On the other hand, current OpenOpt ralg implementation is still far from perfect (at least from ralg fortran version by our dept). However, ralg is for medium-scale problems with nVars up to ~1000, not 10000 as you have. It handles matrix b of shape(nVars,nVars) in memory, and requires 4..5*nVars^2 multiplication operations each iter. If no problems with smooth and noise and 1st derivatives obtain, I would recommend you scipy_lbfgsb, ALGENCAN, scipy_tnc, maybe scipy_slsqp; lincher also can serve, but it's very primitive. Here's full list: http://scipy.org/scipy/scikits/wiki/NLP > > If someone (dmitrey?) could help selecting most appropriate solver > in openopt it would be much appreciated. > In the meawhile I'll try 'ralg'. > > > Thanks in advance, > > Emanuele > > P.S.: I'm having some troubles building openopt in ubuntu gutsy. > "sudo python setup.py install" works but "python setup.py build" > does not, requiring a previously installed "scikits" package. How > can I install openopt in a custom path instead of /usr/local/lib/... ? > Try using "python setup.py", it must ask you what to do (cho0se install) and destination (choose your own) > >> My case is a non-linear problem with simple constraints >> (all variables >0). 
The function is smooth according to what >> I know and I have worked out the analytical gradient. I already >> implemented everything (f and fprime) in python and tested using >> standard scipy.optimize.fmin_cg solver [0]. It works somewhat but: >> >> - it is not stable, i.e. there are sudden jumps sometimes after many >> many iterations in which it seems to converge (and in those cases >> the final solution is usually worse than before the jump) >> - evaluation of the function takes seconds and evaluation of the >> gradient takes many seconds (even minutes) so I cannot wait for >> a huge number of iterations that fmin_cg seems to require >> - starting from different intial points I got different (local?) >> minima quite each time when the number of variables increases. Could >> it be that fmin_cg becomes unstable on large problems? >> >> > [0]: by the way fmin_cg does not handle constraints, at least in > standard scipy, which forced me to use abs() in many places. This > could be a source of instabilities when computing gradient. > I guess you use very obsolete OpenOpt version. I had implemented (ling time ago) yielding error message "the solver scipy_cg cannot handle 'lb' constraints" (at least I get the one with trying to pass lb constraints to scipy_cg) Of course, since scipy.optimize.fmin_cg can't handle constraints, so does OO scipy_cg. Regards, D. From matthieu.brucher at gmail.com Fri Jan 25 06:55:33 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 25 Jan 2008 12:55:33 +0100 Subject: [SciPy-user] openopt: which NLP solver? In-Reply-To: <4799CC3A.6070802@scipy.org> References: <4799BB04.4010600@relativita.com> <4799CC3A.6070802@scipy.org> Message-ID: > > > P.S.: I'm having some troubles building openopt in ubuntu gutsy. > > "sudo python setup.py install" works but "python setup.py build" > > does not, requiring a previously installed "scikits" package. How > > can I install openopt in a custom path instead of /usr/local/lib/... ? > > > Try using "python setup.py", it must ask you what to do (cho0se install) > and destination (choose your own) This is a known issue that dmitrey _must_ fix when he has time. The issue shows up because a lot of paths are added to sys.path, Robert already gave him the solution. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Fri Jan 25 07:04:08 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 25 Jan 2008 14:04:08 +0200 Subject: [SciPy-user] openopt: which NLP solver? In-Reply-To: References: <4799BB04.4010600@relativita.com> <4799CC3A.6070802@scipy.org> Message-ID: <4799D038.60401@scipy.org> Matthieu Brucher wrote: > > > P.S.: I'm having some troubles building openopt in ubuntu gutsy. > > "sudo python setup.py install" works but "python setup.py build" > > does not, requiring a previously installed "scikits" package. How > > can I install openopt in a custom path instead of > /usr/local/lib/... ? > > > Try using "python setup.py", it must ask you what to do (cho0se > install) > and destination (choose your own) > > > This is a known issue that dmitrey _must_ fix when he has time. The > issue shows up because a lot of paths are added to sys.path, Robert > already gave him the solution. 
> > Matthieu Yes, and this is one of issues why (as I had informed some month ago) I don't want openopt being part of scipy: I have no enough time to satisfy all those (changing from time to time) strict requirements to docstring standards, unittests, build system etc. I still intend to fix the build issue, but in future, when I'll have enough time. It requires changing "from ... import ..." for all openopt files. Regards, D. From david at ar.media.kyoto-u.ac.jp Fri Jan 25 08:21:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 Jan 2008 22:21:15 +0900 Subject: [SciPy-user] openopt: which NLP solver? In-Reply-To: <4799D038.60401@scipy.org> References: <4799BB04.4010600@relativita.com> <4799CC3A.6070802@scipy.org> <4799D038.60401@scipy.org> Message-ID: <4799E24B.8080601@ar.media.kyoto-u.ac.jp> dmitrey wrote: > Matthieu Brucher wrote: > >> > P.S.: I'm having some troubles building openopt in ubuntu gutsy. >> > "sudo python setup.py install" works but "python setup.py build" >> > does not, requiring a previously installed "scikits" package. How >> > can I install openopt in a custom path instead of >> /usr/local/lib/... ? >> > >> Try using "python setup.py", it must ask you what to do (cho0se >> install) >> and destination (choose your own) >> >> >> This is a known issue that dmitrey _must_ fix when he has time. The >> issue shows up because a lot of paths are added to sys.path, Robert >> already gave him the solution. >> >> Matthieu >> > Yes, and this is one of issues why (as I had informed some month ago) I > don't want openopt being part of scipy: I have no enough time to satisfy > all those (changing from time to time) strict requirements to docstring > standards, unittests, build system etc. The above issue has nothing to do with docstring or test requirement: it is a basic python requirement, and has not changed for years. If you do not have time to change for this, someone else may have the time, by the way :) cheers, David From dominique.orban at gmail.com Fri Jan 25 09:10:28 2008 From: dominique.orban at gmail.com (Dominique Orban) Date: Fri, 25 Jan 2008 09:10:28 -0500 Subject: [SciPy-user] Symmetric sparse matrices ? Message-ID: <8793ae6e0801250610k4280e77bx5b724ec9f6bb6857@mail.gmail.com> Hello, In SciPy, what an efficient way to create a sparse matrix and specify that it is symmetric, so that only one triangle will be stored? Thanks, Dominique From wnbell at gmail.com Fri Jan 25 09:28:59 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 25 Jan 2008 08:28:59 -0600 Subject: [SciPy-user] Symmetric sparse matrices ? In-Reply-To: <8793ae6e0801250610k4280e77bx5b724ec9f6bb6857@mail.gmail.com> References: <8793ae6e0801250610k4280e77bx5b724ec9f6bb6857@mail.gmail.com> Message-ID: On Jan 25, 2008 8:10 AM, Dominique Orban wrote: > Hello, > > In SciPy, what an efficient way to create a sparse matrix and specify > that it is symmetric, so that only one triangle will be stored? Currently there's no support for symmetric matrices in SciPy. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From mnandris at btinternet.com Fri Jan 25 13:09:04 2008 From: mnandris at btinternet.com (Michael Nandris) Date: Fri, 25 Jan 2008 18:09:04 +0000 (GMT) Subject: [SciPy-user] openopt: which NLP solver? 
In-Reply-To: Message-ID: <655449.55051.qm@web86505.mail.ird.yahoo.com> This works on gutsy 7.10 0) mkdir openopt 1) svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt svn co http://svn.scipy.org/svn/scikits/trunk/delaunay delaunay ( note the 'svn' in the url, not //scipy.org/scipy/.. ) 2) sudo apt-get install python-setuptools 3) sudo python setup.py install 4) test: from scikits.openopt import * Matthieu Brucher wrote: > P.S.: I'm having some troubles building openopt in ubuntu gutsy. > "sudo python setup.py install" works but "python setup.py build" > does not, requiring a previously installed "scikits" package. How > can I install openopt in a custom path instead of /usr/local/lib/... ? > Try using "python setup.py", it must ask you what to do (cho0se install) and destination (choose your own) This is a known issue that dmitrey _must_ fix when he has time. The issue shows up because a lot of paths are added to sys.path, Robert already gave him the solution. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramercer at gmail.com Fri Jan 25 13:39:10 2008 From: ramercer at gmail.com (Adam Mercer) Date: Fri, 25 Jan 2008 13:39:10 -0500 Subject: [SciPy-user] Strange fortran (g95) build error on Mac OS X - not finding fortran compiler In-Reply-To: <799406d60801211140k2961b966jf93095039cf94468@mail.gmail.com> References: <799406d60801211140k2961b966jf93095039cf94468@mail.gmail.com> Message-ID: <799406d60801251039l477ef7ck46a1edcae6ffc8f3@mail.gmail.com> On Jan 21, 2008 2:40 PM, Adam Mercer wrote: > I'm running into a strange problem trying to build scipy-0.6.0 using > the g95-0.90 fortran compiler (from http://www.g95.org) on Mac OS X. 
following up on this, I've managed to get past this error but am now running into the following problem: /opt/local/bin/g95 -shared -shared build/temp.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/fortranobject.o -L/opt/local/lib -Lbuild/temp.macosx-10.3-i386-2.5 -ldfftpack -lfftw3 -o build/lib.macosx-10.3-i386-2.5/scipy/fftpack/_fftpack.so g95: unrecognized option '-shared' g95: unrecognized option '-shared' Undefined symbols: "_PyExc_AttributeError", referenced from: _PyExc_AttributeError$non_lazy_ptr in fortranobject.o "_PyObject_Str", referenced from: _array_from_pyobj in fortranobject.o "_PyArg_ParseTupleAndKeywords", referenced from: _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfft_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfftnd_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_drfft_cache in _fftpackmodule.o "_PyExc_ValueError", referenced from: _PyExc_ValueError$non_lazy_ptr in fortranobject.o "_PyExc_TypeError", referenced from: _PyExc_TypeError$non_lazy_ptr in fortranobject.o "_PyDict_GetItemString", referenced from: _fortran_getattr in fortranobject.o "_PyCObject_AsVoidPtr", referenced from: _init_fftpack in _fftpackmodule.o "_Py_BuildValue", referenced from: _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfft_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfftnd_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_drfft_cache in _fftpackmodule.o "_PyComplex_Type", referenced from: _PyComplex_Type$non_lazy_ptr in _fftpackmodule.o "_PyDict_New", referenced from: _PyFortranObject_NewAsAttr in fortranobject.o _fortran_setattr in fortranobject.o _PyFortranObject_New in fortranobject.o _PyFortranObject_New in fortranobject.o "_PyDict_SetItemString", referenced from: _init_fftpack in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o _fortran_getattr in fortranobject.o _fortran_getattr in fortranobject.o _fortran_setattr in fortranobject.o _PyFortranObject_New in fortranobject.o "_PyType_Type", referenced from: _PyType_Type$non_lazy_ptr in _fftpackmodule.o "__PyObject_New", referenced from: _PyFortranObject_NewAsAttr in fortranobject.o _PyFortranObject_New in fortranobject.o _PyFortranObject_New in fortranobject.o "_PyInt_Type", referenced from: _PyInt_Type$non_lazy_ptr in _fftpackmodule.o "_PyString_FromString", referenced from: _init_fftpack in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _fortran_getattr in fortranobject.o _fortran_getattr in fortranobject.o "_PyErr_Occurred", referenced from: _int_from_pyobj in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd 
in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfft_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfftnd_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_drfft_cache in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o "_PyErr_NewException", referenced from: _init_fftpack in _fftpackmodule.o "_PyImport_ImportModule", referenced from: _init_fftpack in _fftpackmodule.o "_PyMem_Free", referenced from: _fortran_dealloc in fortranobject.o _fortran_dealloc in fortranobject.o "_MAIN_", referenced from: _main in libf95.a(main.o) "_PyCObject_Type", referenced from: _PyCObject_Type$non_lazy_ptr in _fftpackmodule.o "_PyExc_ImportError", referenced from: _PyExc_ImportError$non_lazy_ptr in _fftpackmodule.o "_PyErr_Format", referenced from: _init_fftpack in _fftpackmodule.o _fortran_call in fortranobject.o _fortran_call in fortranobject.o "_PyNumber_Int", referenced from: _int_from_pyobj in _fftpackmodule.o "_PyCObject_FromVoidPtr", referenced from: _fortran_getattr in fortranobject.o "_PyObject_GetAttrString", referenced from: _int_from_pyobj in _fftpackmodule.o _init_fftpack in _fftpackmodule.o "_PyErr_Print", referenced from: _init_fftpack in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o "_PyString_Type", referenced from: _PyString_Type$non_lazy_ptr in _fftpackmodule.o "__Py_NoneStruct", referenced from: __Py_NoneStruct$non_lazy_ptr in _fftpackmodule.o __Py_NoneStruct$non_lazy_ptr in fortranobject.o "_Py_FindMethod", referenced from: _fortran_getattr in fortranobject.o "_PyString_ConcatAndDel", referenced from: _fortran_getattr in fortranobject.o "_PyErr_Clear", referenced from: _int_from_pyobj in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o "_Py_InitModule4", referenced from: _init_fftpack in _fftpackmodule.o "_PyModule_GetDict", referenced from: _init_fftpack in _fftpackmodule.o "_PyExc_RuntimeError", referenced from: _PyExc_RuntimeError$non_lazy_ptr in _fftpackmodule.o _PyExc_RuntimeError$non_lazy_ptr in fortranobject.o "_PyDict_DelItemString", referenced from: _fortran_setattr in fortranobject.o "_PyObject_Type", referenced from: _array_from_pyobj in fortranobject.o "_PySequence_Check", referenced from: _int_from_pyobj in _fftpackmodule.o "_PyString_AsString", referenced from: _array_from_pyobj in fortranobject.o "_PySequence_GetItem", referenced from: _int_from_pyobj in _fftpackmodule.o "_PyErr_SetString", referenced from: _int_from_pyobj in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _array_from_pyobj in fortranobject.o _fortran_setattr in fortranobject.o _fortran_setattr in fortranobject.o "_PyType_IsSubtype", referenced from: _int_from_pyobj in _fftpackmodule.o _int_from_pyobj in _fftpackmodule.o _int_from_pyobj in 
_fftpackmodule.o
_array_from_pyobj in fortranobject.o
ld: symbol(s) not found

Any ideas?

Cheers

Adam

From charles.vejnar at isb-sib.ch Fri Jan 25 13:52:16 2008
From: charles.vejnar at isb-sib.ch (Charles Vejnar)
Date: Fri, 25 Jan 2008 19:52:16 +0100
Subject: [SciPy-user] Draw a density line on an histogram
Message-ID: <200801251952.17261.charles.vejnar@isb-sib.ch>

Hi,

I have a series of numbers, and I want to get an idea of their
distribution. I start by drawing a histogram with matplotlib.

But I would also like a curve (a "smooth histogram") representing the
density (which is not a standard, named function).

Something with kernel density estimates should be possible. Do you have
any idea?

Thank you very much

Charles

From robert.kern at gmail.com Fri Jan 25 13:59:12 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 25 Jan 2008 12:59:12 -0600
Subject: [SciPy-user] Strange fortran (g95) build error on Mac OS X - not finding fortran compiler
In-Reply-To: <799406d60801251039l477ef7ck46a1edcae6ffc8f3@mail.gmail.com>
References: <799406d60801211140k2961b966jf93095039cf94468@mail.gmail.com> <799406d60801251039l477ef7ck46a1edcae6ffc8f3@mail.gmail.com>
Message-ID: <479A3180.3090604@gmail.com>

Adam Mercer wrote:
> On Jan 21, 2008 2:40 PM, Adam Mercer wrote:
>
>> I'm running into a strange problem trying to build scipy-0.6.0 using
>> the g95-0.90 fortran compiler (from http://www.g95.org) on Mac OS X.
>
> following up on this, I've managed to get past this error but am now
> running into the following problem:
>
> [the g95 link command, the "g95: unrecognized option '-shared'"
> warnings, and the long list of undefined Python symbols are snipped;
> see Adam's message above]
>
> ld: symbol(s) not found
>
> Any ideas?

No one has implemented the correct link flags for the g95 compiler on
the OS X platform. They are probably similar to those required for
gfortran. Look in numpy/distutils/fcompiler/gnu.py for the gfortran
implementation (Gnu95FCompiler and its superclass GnuFCompiler) to port
it to numpy/distutils/fcompiler/g95.py. Also double-check if you have
the environment variable LDFLAGS set. It overrides the link flags
entirely, including those added by Python itself to link to the Python
framework.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From robert.kern at gmail.com Fri Jan 25 14:01:35 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 25 Jan 2008 13:01:35 -0600
Subject: [SciPy-user] Draw a density line on an histogram
In-Reply-To: <200801251952.17261.charles.vejnar@isb-sib.ch>
References: <200801251952.17261.charles.vejnar@isb-sib.ch>
Message-ID: <479A320F.3000803@gmail.com>

Charles Vejnar wrote:
> Hi,
>
> I have a series of numbers, and I want to get an idea of their
> distribution. I start by drawing a histogram with matplotlib.
>
> But I would also like a curve (a "smooth histogram") representing the
> density (which is not a standard, named function).
>
> Something with kernel density estimates should be possible. Do you have
> any idea?

In [6]: from numpy import *

In [7]: from scipy.stats import gaussian_kde

In [8]: d = random.standard_normal(1000)

In [9]: k = gaussian_kde(d)

In [10]: d.min(), d.max()
Out[10]: (-2.7279408369776075, 3.210698358446658)

In [11]: x = linspace(-3.5, 3.5, 100)

In [12]: y = k(x)

Now plot y versus x.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
In [6]: from numpy import * In [7]: from scipy.stats import gaussian_kde In [8]: d = random.standard_normal(1000) In [9]: k = gaussian_kde(d) In [10]: d.min(), d.max() Out[10]: (-2.7279408369776075, 3.210698358446658) In [11]: x = linspace(-3.5, 3.5, 100) In [12]: y = k(x) Now plot y versus x. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ramercer at gmail.com Fri Jan 25 14:15:31 2008 From: ramercer at gmail.com (Adam Mercer) Date: Fri, 25 Jan 2008 14:15:31 -0500 Subject: [SciPy-user] Strange fortran (g95) build error on Mac OS X - not finding fortran compiler In-Reply-To: <479A3180.3090604@gmail.com> References: <799406d60801211140k2961b966jf93095039cf94468@mail.gmail.com> <799406d60801251039l477ef7ck46a1edcae6ffc8f3@mail.gmail.com> <479A3180.3090604@gmail.com> Message-ID: <799406d60801251115jfb41f8ch4e9d0a4a2873a303@mail.gmail.com> On Jan 25, 2008 1:59 PM, Robert Kern wrote: > No one has implemented the correct link flags for the g95 compiler on the OS X > platform. They are probably similar to those required for gfortran. Look in > numpy/fcompiler/gnu.py for the gfortran implementation (Gnu95FCompiler and its > superclass GnuFCompiler) to port it to numpy/fcompiler/g95.py. Also double-check > if you have the environment variable LDFLAGS set. It overrides the link flags > entirely, including those added by Python itself to link to the Python framework. Thanks Robert, I'll take a look. Cheers Adam From timmichelsen at gmx-topmail.de Fri Jan 25 16:28:32 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Fri, 25 Jan 2008 22:28:32 +0100 Subject: [SciPy-user] timeseries import error In-Reply-To: References: <200801181553.13213.w.richert@gmx.net> <91cf711d0801180909q7babc52dk9cd19529f888e876@mail.gmail.com> <47979E3C.9050805@gmx-topmail.de> <91cf711d0801240650o25521059lb8d998033a9c8f5d@mail.gmail.com> Message-ID: Thanks for your clarifications. > I won't be providing any binaries until there has been an official release of > numpy using the new version of the masked array module. I can't provide a > timeline for that because I am not really involved in that process. But keep > your eyes peeled for discussion on a numpy 1.0.5 release since that it is when > it is likely to happen from my understanding. So sometime in March or April? I couldn't find a release plan... > As far as ubuntu binaries... I'm a die hard windows guy so I won't be of any > help here. Interesting. I found using Python on windows difficult because there is not yet any package manager which can install all needed FOSS packaged like apt/rpm/fink. As you know from my previous postings, I had difficulties to prepare windows binaries. I have to use Windows at work... Therefore, you affiliation with this OS would be of use here. > It should be extremely simple to build on ubuntu though since linux > comes setup with compilers and there are no external dependencies for the > timeseries module outside of numpy and scipy itself (and scipy is only a run > time requirement, not needed for building it). Yes, it is. I was able to build the version from the sandbox svn just with checkinstall python ./setup.py install When we are ready for binary releases I might even try to prepare a official Ubuntu package. 
Timmie

From Nami.Mowlavi at obs.unige.ch Fri Jan 25 16:47:31 2008 From: Nami.Mowlavi at obs.unige.ch (Nami Mowlavi) Date: Fri, 25 Jan 2008 21:47:31 +0000 (UTC) Subject: [SciPy-user] Installation on Intel OS X.5 leopard: libcc_dynamic problem Message-ID:

Hello, I am trying to compile on an Intel Mac OS X.5 MacBook Pro, following the guidelines from http://www.scipy.org/Installing_SciPy/Mac_OS_X, but with the package downloaded from svn at

svn co http://svn.scipy.org/svn/scipy/trunk scipy

The commands for my installation are:

export MACOSX_DEPLOYMENT_TARGET=10.4
python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build
sudo python setup.py install

I get the following error message:

ld: library not found for -lcc_dynamic

Below are more messages. Does anybody know how to proceed? Thanks, Nami

------
gcc: scipy/fftpack/src/zrfft.c
gcc: scipy/fftpack/src/zfft.c
gcc: scipy/fftpack/src/zfftnd.c
/usr/local/bin/g77 -g -Wall -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/i686-apple-darwin8.8.1/3.4.0 -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so
ld: library not found for -lcc_dynamic
collect2: ld a retourné 1 code d'état d'exécution [ld returned 1 exit status]
ld: library not found for -lcc_dynamic
collect2: ld a retourné 1 code d'état d'exécution [ld returned 1 exit status]
error: Command "/usr/local/bin/g77 -g -Wall -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/i686-apple-darwin8.8.1/3.4.0 -Lbuild/temp.macosx-10.3-fat-2.5 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.macosx-10.3-fat-2.5/scipy/fftpack/_fftpack.so" failed with exit status 1

From robert.kern at gmail.com Fri Jan 25 16:59:24 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Jan 2008 15:59:24 -0600 Subject: [SciPy-user] Installation on Intel OS X.5 leopard: libcc_dynamic problem In-Reply-To: References: Message-ID: <479A5BBC.7030107@gmail.com>

Nami Mowlavi wrote: > Hello, > I am trying to compile on an Intel Mac OS X.5 MacBook Pro, following > the guidelines from http://www.scipy.org/Installing_SciPy/Mac_OS_X, > but with the package downloaded > from svn at > svn co http://svn.scipy.org/svn/scipy/trunk scipy > > The commands for my installation are: > > export MACOSX_DEPLOYMENT_TARGET=10.4 > python setup.py build_src build_clib --fcompiler=gnu95 build_ext > --fcompiler=gnu95 build > sudo python setup.py install > > I get the following error message: > > ld: library not found for -lcc_dynamic > > Below are more messages. Does anybody know how to proceed?

You cannot use g77 with Python universal binaries. You need to use gfortran, as mentioned on the web page.
You can get a binary from here: http://r.research.att.com/tools/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dineshbvadhia at hotmail.com Fri Jan 25 22:59:01 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 25 Jan 2008 19:59:01 -0800 Subject: [SciPy-user] Scipy Sparse Library Message-ID: We built a working C++ program using the SparseLib++ library last year. We are now creating a Python version of the program using the Scipy sparse library (the csc matrix format) and the results of the program are very different to the C++ program. It's either the Python program or the data. We have checked the programs and the data and cannot identify any discrepencies. Is it possible that the Scipy sparse libraries behave differently from the SparseLib++ library? I would have said unlikely but I guess anything is possible. Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Sat Jan 26 09:07:59 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 26 Jan 2008 08:07:59 -0600 Subject: [SciPy-user] Scipy Sparse Library In-Reply-To: References: Message-ID: On Jan 25, 2008 9:59 PM, Dinesh B Vadhia wrote: > > We built a working C++ program using the SparseLib++ library last year. We > are now creating a Python version of the program using the Scipy sparse > library (the csc matrix format) and the results of the program are very > different to the C++ program. It's either the Python program or the data. > We have checked the programs and the data and cannot identify any > discrepencies. > > Is it possible that the Scipy sparse libraries behave differently from the > SparseLib++ library? I would have said unlikely but I guess anything is > possible. Can you tell us what sparse functionality you are using and which version of SciPy you have? Can you produce a short script that demonstrates the problem? SciPy sparse arithmetic should produce the same results as SparseLib++. The only case when this may not be true is when the entries of your matrix have drastically varying values (e.g. 1e-10 vs 1e10) and the lack of associativity in floating math produces different results. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ramercer at gmail.com Sat Jan 26 10:32:25 2008 From: ramercer at gmail.com (Adam Mercer) Date: Sat, 26 Jan 2008 10:32:25 -0500 Subject: [SciPy-user] scipy.lib.tests.test_blas.test_fblas1_simple failure on Mac OS X Message-ID: <799406d60801260732i2d1e2304y18fffec5a8c3a21b@mail.gmail.com> Hi Running scipy.test() on Mac OS X Leopard, numpy-1.0.4 & scipy-0.6.0, results in the following failure: FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/local/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/opt/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1.294297555666517e-37j DESIRED: (-9+2j) Is this anything to worry about? 
Cheers Adam From dineshbvadhia at hotmail.com Sat Jan 26 15:34:17 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 26 Jan 2008 12:34:17 -0800 Subject: [SciPy-user] Scipy Sparse Library Message-ID: Nathan: Our version of SciPy is 0.60.win32-py2.5. We are solving b = Ax, where b and x are 'float' vectors, A is an MxN sparse matrix (with M != N) with binary (ie. 0 or 1) entries, and is defined as: A = scipy.asmatrix(scipy.zeros((M, N), float)) # 'float' because byte int not supported yet A is populated. We then use the following SciPy statement to transform into a sparse matrix in csc format: A = scipy.sparse.csc_matrix(A) # scipy sparse csc matrix version Next, calculate b as: b = A * x We don't have drastically varying values. Cheers! Dinesh ------------------------------ Message: 6 Date: Sat, 26 Jan 2008 08:07:59 -0600 From: "Nathan Bell" Subject: Re: [SciPy-user] Scipy Sparse Library To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset=ISO-8859-1 On Jan 25, 2008 9:59 PM, Dinesh B Vadhia wrote: > > We built a working C++ program using the SparseLib++ library last year. We > are now creating a Python version of the program using the Scipy sparse > library (the csc matrix format) and the results of the program are very > different to the C++ program. It's either the Python program or the data. > We have checked the programs and the data and cannot identify any > discrepencies. > > Is it possible that the Scipy sparse libraries behave differently from the > SparseLib++ library? I would have said unlikely but I guess anything is > possible. Can you tell us what sparse functionality you are using and which version of SciPy you have? Can you produce a short script that demonstrates the problem? SciPy sparse arithmetic should produce the same results as SparseLib++. The only case when this may not be true is when the entries of your matrix have drastically varying values (e.g. 1e-10 vs 1e10) and the lack of associativity in floating math produces different results. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sun Jan 27 00:18:02 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 27 Jan 2008 14:18:02 +0900 Subject: [SciPy-user] scipy.lib.tests.test_blas.test_fblas1_simple failure on Mac OS X In-Reply-To: <799406d60801260732i2d1e2304y18fffec5a8c3a21b@mail.gmail.com> References: <799406d60801260732i2d1e2304y18fffec5a8c3a21b@mail.gmail.com> Message-ID: <479C140A.6040400@ar.media.kyoto-u.ac.jp> Adam Mercer wrote: > Hi > > Running scipy.test() on Mac OS X Leopard, numpy-1.0.4 & scipy-0.6.0, > results in the following failure: > > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/opt/local/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > line 76, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File "/opt/local/lib/python2.5/site-packages/numpy/testing/utils.py", > line 158, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: 1.294297555666517e-37j > DESIRED: (-9+2j) > > Is this anything to worry about? > This is likely a symptom of bad argument passing between C and Fortran. Normally, the problem is supposed to be solved. 
But maybe leopard has different convention. http://projects.scipy.org/scipy/scipy/ticket/238 cheers, David From ndbecker2 at gmail.com Sun Jan 27 06:27:40 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 27 Jan 2008 06:27:40 -0500 Subject: [SciPy-user] optimization advice needed Message-ID: I have an optimization problem that doesn't quite fit in the usual framework. The problem is to minimize the mean-square-error between a sequence of noisy observations and a model. Let's suppose there are 2 parameters in the model: (a,b) So we observe g = f(a,b) + n. Assume all I know about the problem is it is probably convex. Now a couple of things are unusual: 1) The problem is not to optimize the estimates (a',b') one time - it is more of an optimal control problem. (a,b) are slowly varying, and we want to continuously refine the estimates. 2) We want an inversion of the usual control. Rather than having the optimization algorithm call my function, I need my function to call the optimization. Specifically I will generate one _new_ random vector of observations. Then I want to perform one iteration of the optimization on this observation. (In the past, I have adapted the simplex algorithm to work this way). So, any advice on how to proceed? From emanuele at relativita.com Sun Jan 27 06:51:22 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sun, 27 Jan 2008 12:51:22 +0100 Subject: [SciPy-user] openopt: which NLP solver? In-Reply-To: <4799CC3A.6070802@scipy.org> References: <4799BB04.4010600@relativita.com> <4799CC3A.6070802@scipy.org> Message-ID: <479C703A.8050902@relativita.com> First of all thanks a lot Dmitrey. Your advices are very valuable. Other comments below, inline. dmitrey wrote: > ... > as well as many other solvers, including even ALGENCAN. Maybe I 'll > provide native OO handling of the situation but, on the other hand, > usually it means something incorrect with your own funcs, for example it > has lots of local minima or incorrect gradients. > There are fair chances that there are many local minima (at least "some" are present for sure) and some troubles with gradients too :) >> - it is possible that the evaluation of my function and gradient suffers >> some numerical instabilities >> >> > you should investigate is it so or not. Using p.check.df=1 could be very > helpful (see openopt doc page, "auto check derivatives" chapter). If > your 1st derivatives really have instabilities (noise, non-smooth - for > example, using abs(...)) - then only ralg can handle the problem. On the > other hand, current OpenOpt ralg implementation is still far from > perfect (at least from ralg fortran version by our dept). However, ralg > is for medium-scale problems with nVars up to ~1000, not 10000 as you > have. It handles matrix b of shape(nVars,nVars) in memory, and requires > 4..5*nVars^2 multiplication operations each iter. > If no problems with smooth and noise and 1st derivatives obtain, I would > recommend you scipy_lbfgsb, ALGENCAN, scipy_tnc, maybe scipy_slsqp; > lincher also can serve, but it's very primitive. Here's full list: > http://scipy.org/scipy/scikits/wiki/NLP > > Very interesting. I'll investigate as you suggest. I'll work on the 1000 variable problem and ralg in a short time. After that I'll try the other solvers you suggested. >> If someone (dmitrey?) could help selecting most appropriate solver >> in openopt it would be much appreciated. >> In the meawhile I'll try 'ralg'. 
>> >> >> Thanks in advance, >> >> Emanuele >> >> P.S.: I'm having some troubles building openopt in ubuntu gutsy. >> "sudo python setup.py install" works but "python setup.py build" >> does not, requiring a previously installed "scikits" package. How >> can I install openopt in a custom path instead of /usr/local/lib/... ? >> >>
> Try using "python setup.py", it must ask you what to do (choose install) > and destination (choose your own) >

Right, I'm just trying now. Cool :)

>> >> [0]: by the way fmin_cg does not handle constraints, at least in >> standard scipy, which forced me to use abs() in many places. This >> could be a source of instabilities when computing gradient. >> >>
> I guess you use a very obsolete OpenOpt version. > I had implemented (long time ago) yielding the error message > "the solver scipy_cg cannot handle 'lb' constraints" > (at least I get that one when trying to pass lb constraints to scipy_cg) > Of course, since scipy.optimize.fmin_cg can't handle constraints, neither > does OO scipy_cg. >

The 'fmin_cg' I'm using is the standard scipy one (scipy-0.5.2, as in ubuntu gutsy 7.10), not yours in openopt. I'll use yours now. Thanks again! Emanuele

From dmitrey.kroshko at scipy.org Sun Jan 27 07:37:59 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 27 Jan 2008 14:37:59 +0200 Subject: Re: [SciPy-user] optimization advice needed In-Reply-To: References: Message-ID: <479C7B27.5000705@scipy.org>

I guess you should specify your problem more exactly in mathematical terms. Does it belong to an ordinary LSP, or is it better considered as AR, ARX, ARMA, or ARMAX? Do the previous a, b values affect the next ones? Why couldn't you just solve the problem for each new vector of observations? Is the number of observations sufficiently larger than the number of vars (= num(a,b) = 2)? Is f(a,b) convex? Does the noise really have a Gaussian distribution? If not, least squares may not be the best choice. If you answer these questions, it will be easier for others to give advice. As for me, I'm not skilled enough in optimal control problems to comment on this. Regards, D.

Neal Becker wrote: > I have an optimization problem that doesn't quite fit in the usual > framework. > > The problem is to minimize the mean-square-error between a sequence of noisy > observations and a model. > > Let's suppose there are 2 parameters in the model: (a,b) > So we observe g = f(a,b) + n. > > Assume all I know about the problem is it is probably convex. > > Now a couple of things are unusual: > 1) The problem is not to optimize the estimates (a',b') one time - it is > more of an optimal control problem. (a,b) are slowly varying, and we want > to continuously refine the estimates. > > 2) We want an inversion of the usual control. Rather than having the > optimization algorithm call my function, I need my function to call the > optimization. Specifically I will generate one _new_ random vector of > observations. Then I want to perform one iteration of the optimization on > this observation. (In the past, I have adapted the simplex algorithm to > work this way). > > So, any advice on how to proceed?
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From ndbecker2 at gmail.com Sun Jan 27 08:06:53 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 27 Jan 2008 08:06:53 -0500 Subject: [SciPy-user] optimization advice needed References: <479C7B27.5000705@scipy.org> Message-ID: Let me try to address the questions (to the extent that I know the answers): I believe f(a,b) is convex. Why not just iteratively solve for each observation? 2 reasons. First, because it is too computationally expensive in this application. Second, because each observation is noisy, so you don't want to adapt the model to optimize just one observation. I think Gaussian noise is a good model here. For the first question. The values of the parameters, which I called (a,b), but in general we might need more variables, are more-or-less constant over time, but change very slowly with respect to the observations. I guess you're asking if there is a model for how they change over time? No, only that they change very slowly. dmitrey wrote: > I guess you should specify your problem more exactly in mathematical > terms. Does it belong to ordinary LSP, or maybe it's better to consider as > AR, ARX, ARMA, ARMAX? Does previous a, b values affect next? > Why couldn't you just solve the problem for each new vector of > observations? Does the number of observations sufficiently more than > number of vars (= num(a,b)=2)? > Is f(a,b) convex? > Does the noise really has Gaussian distribution? If no, least squares > can be not a best decision. > Would you answer the questions, it will be easier for others to give > advices. > As for me, I'm not skilled enough in optimal control problems to comment > this. > Regards, D. > > > Neal Becker wrote: >> I have an optimization problem that doesn't quite fit in the usual >> framework. >> >> The problem is to minimize the mean-square-error between a sequence of >> noisy observations and a model. >> >> Let's suppose there are 2 parameters in the model: (a,b) >> So we observe g = f(a,b) + n. >> >> Assume all I know about the problem is it is probably convex. >> >> Now a couple of things are unusual: >> 1) The problem is not to optimize the estimates (a',b') one time - it is >> more of an optimal control problem. (a,b) are slowly varying, and we >> want to continuously refine the estimates. >> >> 2) We want an inversion of the usual control. Rather than having the >> optimization algorithm call my function, I need my function to call the >> optimization. Specifically I will generate one _new_ random vector of >> observations. Then I want to perform one iteration of the optimization >> on >> this observation. (In the past, I have adapted the simplex algorithm to >> work this way). >> >> So, any advice on how to proceed? >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> From cmutel at gmail.com Sun Jan 27 08:20:22 2008 From: cmutel at gmail.com (Christopher Mutel) Date: Sun, 27 Jan 2008 14:20:22 +0100 Subject: [SciPy-user] Sparse matrices and array multiplication Message-ID: <5e5978e10801270520o7cf64420y29966b1235b2093a@mail.gmail.com> Hello all- I hesitate to ask what may be a decidedly simple question, but is it somehow possible to do array multiplication (the way NumPy arrays are multiplied) with the sparse matrix classes defined in SciPy? 
I have tried searching the NumPy book & on-line documentation, and the mailing list, but haven't yet found a way. By array multiplication, I mean:

a = [a1, a2]
b = [[b1, b2], [b3, b4]]
a*b = [[a1*b1, a2*b2], [a1*b3, a2*b4]]

I am using relatively large arrays, about 1500*4000, with a 1-d multiplying vector of length 1500, but with only about 1-2% coverage in both, so populating NumPy arrays means lots of wasted space, or even exhausts available memory when considering ~20 arrays simultaneously. One possible approach would be to slice the arrays to remove empty rows and columns, using a dictionary to match new indices to the old indices, but probably there is some magic in the SciPy library that is cleverer and faster. Any help, or suggestions for further investigation, would be greatly appreciated. -Chris Mutel

-- ############################ Chris Mutel Ökologisches Systemdesign - Ecological Systems Design Institut f. Umweltingenieurwissenschaften - Institute for Environmental Engineering ETH Zürich - HIF C 42 - Schafmattstr. 6 8093 Zürich Telefon: +41 44 633 71 45 - Fax: +41 44 633 10 61 ############################

From dmitrey.kroshko at scipy.org Sun Jan 27 09:14:34 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 27 Jan 2008 16:14:34 +0200 Subject: Re: [SciPy-user] optimization advice needed In-Reply-To: References: <479C7B27.5000705@scipy.org> Message-ID: <479C91CA.2000807@scipy.org>

You should specify what affects the solution a*, b* more: either 1) the other parameters, or 2) the previous solution a*[k-1], b*[k-1], or 3) a mix of the 1st and 2nd. If 2) or 3), something like ARMAX should be used. As for computational expense, since your a*, b* do not differ significantly from a*[k-1], b*[k-1], you could just set x0 = [a*[k-1], b*[k-1]] (i.e. start from the previous solution). Regards, D.

Neal Becker wrote: > Let me try to address the questions (to the extent that I know the answers): > I believe f(a,b) is convex. > Why not just iteratively solve for each observation? 2 reasons. First, > because it is too computationally expensive in this application. Second, > because each observation is noisy, so you don't want to adapt the model to > optimize just one observation. > I think Gaussian noise is a good model here. > > For the first question. The values of the parameters, which I called (a,b), > but in general we might need more variables, are more-or-less constant over > time, but change very slowly with respect to the observations. I guess > you're asking if there is a model for how they change over time? No, only > that they change very slowly. > > dmitrey wrote: > >> I guess you should specify your problem more exactly in mathematical >> terms. Does it belong to ordinary LSP, or maybe it's better to consider as >> AR, ARX, ARMA, ARMAX? Does previous a, b values affect next? >> Why couldn't you just solve the problem for each new vector of >> observations? Does the number of observations sufficiently more than >> number of vars (= num(a,b)=2)? >> Is f(a,b) convex? >> Does the noise really has Gaussian distribution? If no, least squares >> can be not a best decision. >> Would you answer the questions, it will be easier for others to give >> advices. >> As for me, I'm not skilled enough in optimal control problems to comment >> this. >> Regards, D. >> >> Neal Becker wrote: >>> I have an optimization problem that doesn't quite fit in the usual >>> framework. >>> >>> The problem is to minimize the mean-square-error between a sequence of >>> noisy observations and a model. >>> >>> Let's suppose there are 2 parameters in the model: (a,b) >>> So we observe g = f(a,b) + n. >>> >>> Assume all I know about the problem is it is probably convex. >>> >>> Now a couple of things are unusual: >>> 1) The problem is not to optimize the estimates (a',b') one time - it is >>> more of an optimal control problem. (a,b) are slowly varying, and we >>> want to continuously refine the estimates. >>> >>> 2) We want an inversion of the usual control. Rather than having the >>> optimization algorithm call my function, I need my function to call the >>> optimization. Specifically I will generate one _new_ random vector of >>> observations. Then I want to perform one iteration of the optimization >>> on this observation. (In the past, I have adapted the simplex algorithm to >>> work this way). >>> >>> So, any advice on how to proceed?
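A bare-bones sketch of the warm-start idea described above -- refitting each new batch of observations starting from the previous estimate. The linear model, the numbers, and the helper name mse are all made up for the example, and scipy.optimize.fmin merely stands in for whichever solver is actually used:

import numpy as np
from scipy.optimize import fmin

t = np.linspace(0.0, 1.0, 100)

def mse(params, g):
    # mean-square error between observations g and the toy model a*t + b
    a, b = params
    return np.mean((g - (a * t + b)) ** 2)

x0 = np.array([0.0, 0.0])                  # initial guess for (a, b)
for k in range(20):
    a_true = 2.0 + 0.01 * k                # slowly drifting true parameters
    b_true = 1.0
    g = a_true * t + b_true + 0.1 * np.random.standard_normal(t.shape)
    x0 = fmin(mse, x0, args=(g,), disp=0)  # warm start from the last solution
print(x0)

Because x0 starts near the optimum at every step, each refit converges in few iterations, which is the point of dmitrey's suggestion.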
From wnbell at gmail.com Sun Jan 27 10:23:32 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 27 Jan 2008 09:23:32 -0600 Subject: Re: [SciPy-user] Sparse matrices and array multiplication In-Reply-To: <5e5978e10801270520o7cf64420y29966b1235b2093a@mail.gmail.com> References: <5e5978e10801270520o7cf64420y29966b1235b2093a@mail.gmail.com> Message-ID:

On Jan 27, 2008 7:20 AM, Christopher Mutel wrote: > By array multiplication, I mean: > a = [a1, a2] > b = [[b1, b2], [b3, b4]] > a*b = [[a1*b1, a2*b2], [a1*b3, a2*b4]] > > I am using relatively large arrays, about 1500*4000, with a 1-d > multiplying vector of length 1500, but with only about 1-2% coverage > in both, so populating NumPy arrays means lots of wasted space, or > even exhausts available memory when considering ~20 arrays > simultaneously.

To do this with sparse matrices you can put the entries of 'a' on the diagonal of a matrix and use it to scale the columns of b.

from scipy.sparse import *
a = spdiags( [a], [0], 2, 2)  # a is now [[a1,0],[0,a2]] in sparse format
b*a  # should be [[a1*b1, a2*b2], [a1*b3, a2*b4]]

In practice, (a * b.T).T may be faster here, so you might try that one too.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/
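A concrete check of that recipe on a tiny matrix (the numbers are arbitrary, and b is stored as CSR here, though any sparse format that supports matrix multiplication works the same way):

# Column scaling via a sparse diagonal matrix:
# (b * diag(a))[i, j] = b[i, j] * a[j], exactly the broadcast product asked for.
import numpy as np
from scipy import sparse

a = np.array([2.0, 3.0])
b = sparse.csr_matrix(np.array([[1.0, 4.0],
                                [5.0, 6.0]]))

d = sparse.spdiags([a], [0], 2, 2)   # sparse diag(a)
print((b * d).todense())             # [[ 2. 12.] [10. 18.]]
print(np.array([[1.0, 4.0],
                [5.0, 6.0]]) * a)    # same result via dense broadcasting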
From dmitrey.kroshko at scipy.org Sun Jan 27 13:14:12 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 27 Jan 2008 20:14:12 +0200 Subject: Re: [SciPy-user] optimization advice needed In-Reply-To: References: <479C7B27.5000705@scipy.org> Message-ID: <479CC9F4.2060205@scipy.org>

Another approach could be using ARMAX to estimate the initial x0 - maybe it will be closer to x*[k] than the previous solution x*[k-1] is. I guess your model order will not be too big; 2-3 previous steps will be enough, so it will not add significant computational expense. D.

Neal Becker wrote: > Let me try to address the questions (to the extent that I know the answers): > I believe f(a,b) is convex. > Why not just iteratively solve for each observation? 2 reasons. First, > because it is too computationally expensive in this application. Second, > because each observation is noisy, so you don't want to adapt the model to > optimize just one observation. > I think Gaussian noise is a good model here. > > For the first question. The values of the parameters, which I called (a,b), > but in general we might need more variables, are more-or-less constant over > time, but change very slowly with respect to the observations. I guess > you're asking if there is a model for how they change over time? No, only > that they change very slowly. >

From ilmar at wilbers.no Sun Jan 27 14:52:49 2008 From: ilmar at wilbers.no (Ilmar Wilbers) Date: Sun, 27 Jan 2008 19:52:49 +0000 (UTC) Subject: [SciPy-user] Installation on Intel OS X.5 leopard: libcc_dynamic problem References: <479A5BBC.7030107@gmail.com> Message-ID:

Robert Kern gmail.com> writes: > > You cannot use g77 with Python universal binaries. You need to use > gfortran, as > mentioned on the web page. You can get a binary from here: > > http://r.research.att.com/tools/

Hi, If I remove the -lcc_dynamic flag manually (edit the numpy.distutils source code), I get this to work just fine. Isn't it possible to remove lines 162-163 in numpy.distutils.fcompiler.gnu by using some flag? ilmar

From robert.kern at gmail.com Sun Jan 27 15:20:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 27 Jan 2008 14:20:34 -0600 Subject: Re: [SciPy-user] Installation on Intel OS X.5 leopard: libcc_dynamic problem In-Reply-To: References: <479A5BBC.7030107@gmail.com> Message-ID: <479CE792.4020407@gmail.com>

Ilmar Wilbers wrote: > Robert Kern gmail.com> writes: > >> You cannot use g77 with Python universal binaries. You need to use >> gfortran, as >> mentioned on the web page. You can get a binary from here: >> >> http://r.research.att.com/tools/ > > Hi, > > If I remove the -lcc_dynamic flag manually (edit the numpy.distutils > source code), I get this to work just fine.

Did you try to import a module with FORTRAN code? The problem that -lcc_dynamic solves only shows up then, not at build time.

> Isn't it possible to remove lines > 162-163 in numpy.distutils.fcompiler.gnu by using some flag?

No. -lcc_dynamic is required for gcc 3.x, the only versions of gcc which g77 works with. You cannot use these with a Universal binary of Python. You need to use gfortran. We need to find out why you managed to pick up g77 instead of gfortran when you specified --fcompiler=gnu95.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From gael.varoquaux at normalesup.org Sun Jan 27 16:50:56 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 27 Jan 2008 22:50:56 +0100 Subject: Re: [SciPy-user] optimization advice needed In-Reply-To: References: Message-ID: <20080127215056.GA29809@phare.normalesup.org>

On Sun, Jan 27, 2008 at 06:27:40AM -0500, Neal Becker wrote: > The problem is to minimize the mean-square-error between a sequence of noisy > observations and a model. > [...] > 1) The problem is not to optimize the estimates (a',b') one time - it is > more of an optimal control problem. (a,b) are slowly varying, and we want > to continuously refine the estimates.

This sounds exactly like what a Bayesian estimator is good at. I can't give you a good explanation of Bayesian estimation here for lack of time, and the info on the net is definitely sparse (try wikipedia, but the article is hard to read). It is good to know that if your problem is linear and your noise is Gaussian, you can use a Kalman filter (http://www.scipy.org/Cookbook/KalmanFiltering). HTH, Gaël
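To make that pointer concrete in the simplest possible setting, here is a scalar sketch: a constant-gain (exponentially weighted) update, which is what a Kalman filter reduces to in steady state for a random-walk parameter observed in Gaussian noise. It is a toy with made-up numbers, not the cookbook code:

# Recursive refinement of one slowly drifting parameter from noisy
# observations: new_estimate = old_estimate + gain * innovation.
import numpy as np

gain = 0.05        # small gain: heavy smoothing, slow tracking
a_hat = 0.0        # running estimate
a_true = 1.0

for k in range(500):
    a_true += 0.002                                   # slow drift
    obs = a_true + 0.3 * np.random.standard_normal()  # noisy observation
    a_hat += gain * (obs - a_hat)                     # innovation update

print(a_hat, a_true)   # a_hat tracks a_true with some lag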
From ilmarw at simula.no Sun Jan 27 16:52:29 2008 From: ilmarw at simula.no (Ilmar Wilbers) Date: Sun, 27 Jan 2008 22:52:29 +0100 Subject: [SciPy-user] Error and fail on test Message-ID: <479CFD1D.8040104@simula.no>

Hi, I have been building scipy on Mac 10.5 with gcc 4.0.1 and linked it with BLAS and LAPACK. Installation seems to be fine, but when I run scipy.test(1, 10), I get the following error and failure:

======================================================================
ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer
from scipy import stats
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in
from stats import *
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/stats/stats.py", line 191, in
import scipy.special as special
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in
from basic import *
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in
from _cephes import *
ImportError: dlopen(/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _do_lio
Referenced from: /Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/special/_cephes.so
Expected in: dynamic lookup

======================================================================
FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal
assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
ACTUAL: 7.3632427319432975e-38j
DESIRED: (-9+2j)

----------------------------------------------------------------------
Ran 1144 tests in 3.605s

FAILED (failures=1, errors=1)

For the error, I did as suggested in http://projects.scipy.org/pipermail/scipy-user/2007-April/011769.html and http://projects.scipy.org/pipermail/scipy-user/2007-April/011772.html

For the failure, it seems that the patch from http://www.scipy.org/scipy/scipy/ticket/238 has perhaps not been applied? What should I check? Further up I got the warning: WARNING: clapack module is empty. How should I proceed? Is the compiler version (4.0.1) the problem here? ilmar

From strawman at astraw.com Sun Jan 27 16:52:39 2008 From: strawman at astraw.com (Andrew Straw) Date: Sun, 27 Jan 2008 13:52:39 -0800 Subject: Re: [SciPy-user] optimization advice needed In-Reply-To: References: Message-ID: <479CFD27.6060908@astraw.com>

If f() is stationary and you are trying to estimate a and b, isn't this exactly the case of a Kalman filter for linear f()? And if f() is non-linear, there are extensions to the Kalman framework to handle this. -Andrew

Neal Becker wrote: > I have an optimization problem that doesn't quite fit in the usual > framework. > > The problem is to minimize the mean-square-error between a sequence of noisy > observations and a model.
> > Let's suppose there are 2 parameters in the model: (a,b) > So we observe g = f(a,b) + n. > > Assume all I know about the problem is it is probably convex. > > Now a couple of things are unusual: > 1) The problem is not to optimize the estimates (a',b') one time - it is > more of an optimal control problem. (a,b) are slowly varying, and we want > to continuously refine the estimates. > > 2) We want an inversion of the usual control. Rather than having the > optimization algorithm call my function, I need my function to call the > optimization. Specifically I will generate one _new_ random vector of > observations. Then I want to perform one iteration of the optimization on > this observation. (In the past, I have adapted the simplex algorithm to > work this way). > > So, any advice on how to proceed? > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From strawman at astraw.com Sun Jan 27 16:56:24 2008 From: strawman at astraw.com (Andrew Straw) Date: Sun, 27 Jan 2008 13:56:24 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> Message-ID: <479CFE08.3090505@astraw.com> Any word yet? This would be a really excellent addition. David Cournapeau wrote: > I actually asked K. Murphy if he would agree on dual licensing BNT and > co under the BSD, for porting to scipy/scikits, and am waiting for his > answer on this, From ramercer at gmail.com Sun Jan 27 17:50:11 2008 From: ramercer at gmail.com (Adam Mercer) Date: Sun, 27 Jan 2008 17:50:11 -0500 Subject: [SciPy-user] scipy.lib.tests.test_blas.test_fblas1_simple failure on Mac OS X In-Reply-To: <479C140A.6040400@ar.media.kyoto-u.ac.jp> References: <799406d60801260732i2d1e2304y18fffec5a8c3a21b@mail.gmail.com> <479C140A.6040400@ar.media.kyoto-u.ac.jp> Message-ID: <799406d60801271450w750f6f33ud03a86eecd399f52@mail.gmail.com> On Jan 27, 2008 12:18 AM, David Cournapeau wrote: > This is likely a symptom of bad argument passing between C and Fortran. > Normally, the problem is supposed to be solved. But maybe leopard has > different convention. > > http://projects.scipy.org/scipy/scipy/ticket/238 Thanks, thats the problem. If I apply the patch from r3387 then all tests pass! 
Cheers Adam From ramercer at gmail.com Sun Jan 27 17:51:14 2008 From: ramercer at gmail.com (Adam Mercer) Date: Sun, 27 Jan 2008 17:51:14 -0500 Subject: [SciPy-user] Error and fail on test In-Reply-To: <479CFD1D.8040104@simula.no> References: <479CFD1D.8040104@simula.no> Message-ID: <799406d60801271451k51bbccadja4c660e0f225f2c2@mail.gmail.com> On Jan 27, 2008 4:52 PM, Ilmar Wilbers wrote: > ====================================================================== > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > line 76, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File > "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/numpy/testing/utils.py", > line 158, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: 7.3632427319432975e-38j > DESIRED: (-9+2j) I ran into this problem, it has been fixed in SVN http://projects.scipy.org/scipy/scipy/changeset/3387 Cheers Adam From yennifersantiago at gmail.com Sun Jan 27 20:59:47 2008 From: yennifersantiago at gmail.com (Yennifer Santiago) Date: Sun, 27 Jan 2008 21:59:47 -0400 Subject: [SciPy-user] Install_SciPy Message-ID: <41bc705b0801271759u71fb3ae6s6e91db8bae442f62@mail.gmail.com> Hello... My last email was the follow: > >I want to install Scipy 0.6.0, I already install the packages python, > >numpy, > > atlas, lapack an SciPy, but when I make the test, after intall SciPy: > > > > carolina at carolinapc :~$ import scipy > > carolina at carolinapc:~$ scipy.test(level=1) > > bash: error de sintaxis cerca de token no esperado `level=1' > > > > What is the problem?? I already could execute the above commands in a python prompt, that was the problem. But now that test produce some errors and I don't understand what is the new problem: carolina at carolinapc:~$ python Test_SciPy.py RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 Found 9/9 tests for scipy.cluster.vq Found 18/18 tests for scipy.fftpack.basic Found 4/4 tests for scipy.fftpack.helper Found 20/20 tests for scipy.fftpack.pseudo_diffs Found 1/1 tests for scipy.integrate Found 10/10 tests for scipy.integrate.quadpack Found 3/3 tests for scipy.integrate.quadrature Found 6/6 tests for scipy.interpolate Found 6/6 tests for scipy.interpolate.fitpack Found 4/4 tests for scipy.io.array_import Warning: FAILURE importing tests for /usr/lib/python2.4/site-packages/scipy/sparse/sparse.py:21: ImportError: cannot import name cscmux (in ?) 
Found 13/13 tests for scipy.io.mmio Found 5/5 tests for scipy.io.npfile Found 4/4 tests for scipy.io.recaster Found 16/16 tests for scipy.lib.blas Found 128/128 tests for scipy.lib.blas.fblas Found 42/42 tests for scipy.lib.lapack Found 41/41 tests for scipy.linalg.basic Found 16/16 tests for scipy.linalg.blas Found 72/72 tests for scipy.linalg.decomp Found 128/128 tests for scipy.linalg.fblas Found 6/6 tests for scipy.linalg.iterative Found 4/4 tests for scipy.linalg.lapack Found 7/7 tests for scipy.linalg.matfuncs Found 399/399 tests for scipy.ndimage Found 5/5 tests for scipy.odr Found 8/8 tests for scipy.optimize Found 1/1 tests for scipy.optimize.cobyla Found 10/10 tests for scipy.optimize.nonlin Found 4/4 tests for scipy.optimize.zeros Found 5/5 tests for scipy.signal.signaltools Found 4/4 tests for scipy.signal.wavelets Found 342/342 tests for scipy.special.basic Found 3/3 tests for scipy.special.spfun_stats Found 107/107 tests for scipy.stats Found 73/73 tests for scipy.stats.distributions Found 10/10 tests for scipy.stats.morestats Found 0/0 tests for __main__ .../usr/lib/python2.4/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................Residual: 1.05006950608e-07 ..................../usr/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ...... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ...............EE............................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ................................................................................................................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .............Result may be inaccurate, approximate err = 2.46937553108e-09 ...Result may be inaccurate, approximate err = 1.45880453615e-10 ........................................................................................................../usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. 
warnings.warn('Mode "reflect" may yield incorrect results on ' ...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................0.2 0.2 0.2 ......0.2 ..0.2 0.2 0.2 0.2 0.2 .........................................................................................................................................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ...... ====================================================================== ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mmio.py", line 152, in check_simple_todense b = mmread(fn).todense() AttributeError: 'numpy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: check_simple_write_read ( scipy.io.tests.test_mmio.test_mmio_coordinate) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mmio.py", line 160, in check_simple_write_read b = scipy.sparse.coo_matrix((V,(I,J)),dims=(5,5)) AttributeError: 'module' object has no attribute 'sparse' ---------------------------------------------------------------------- Ran 1534 tests in 3.993s FAILED (errors=2) -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles.vejnar at isb-sib.ch Mon Jan 28 04:07:38 2008 From: charles.vejnar at isb-sib.ch (Charles Vejnar) Date: Mon, 28 Jan 2008 10:07:38 +0100 Subject: [SciPy-user] Draw a density line on an histogram In-Reply-To: <479A320F.3000803@gmail.com> References: <200801251952.17261.charles.vejnar@isb-sib.ch> <479A320F.3000803@gmail.com> Message-ID: <200801281007.38667.charles.vejnar@isb-sib.ch> Thank you. Ok, with gaussian_kde then linspace (it's a bit different than in R). Is it possible to have a different kernel than "gaussian" ? Charles > > Hi, > > > > I have a series of numbers, and I want to have an idea about their > > distribution. I start with drawing an histogram with matplotlib. > > > > But I would like to have a curve (a "smooth histogram") representing the > > density (which is not a usual function). > > > > Something with kernel density estimates should be possible. Do you have > > any idea ? > > In [6]: from numpy import * > > In [7]: from scipy.stats import gaussian_kde > > In [8]: d = random.standard_normal(1000) > > In [9]: k = gaussian_kde(d) > > In [10]: d.min(), d.max() > Out[10]: (-2.7279408369776075, 3.210698358446658) > > In [11]: x = linspace(-3.5, 3.5, 100) > > In [12]: y = k(x) > > > Now plot y versus x. 
> > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco

From robert.kern at gmail.com Mon Jan 28 11:05:40 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Jan 2008 10:05:40 -0600 Subject: [SciPy-user] Draw a density line on an histogram In-Reply-To: <200801281007.38667.charles.vejnar@isb-sib.ch> References: <200801251952.17261.charles.vejnar@isb-sib.ch> <479A320F.3000803@gmail.com> <200801281007.38667.charles.vejnar@isb-sib.ch> Message-ID: <479DFD54.3010208@gmail.com>

Charles Vejnar wrote: > Thank you. > > Ok, with gaussian_kde then linspace (it's a bit different than in R). > > Is it possible to have a different kernel than "gaussian" ?

No one has implemented other kernels for us, no.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
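Rolling your own fixed-bandwidth kernel is only a few lines if the Gaussian one does not suit. A sketch with an Epanechnikov kernel follows; the bandwidth h and the helper name epanechnikov_kde are made up here, and h is picked by hand rather than by any automatic rule:

# Hand-rolled kernel density estimate: sum over data points of
# K((x - x_i) / h) / (n * h), with the Epanechnikov kernel
# K(u) = 0.75 * (1 - u**2) on |u| <= 1, zero elsewhere.
import numpy as np

def epanechnikov_kde(data, x, h):
    u = (x[:, None] - data[None, :]) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return K.sum(axis=1) / (len(data) * h)

d = np.random.standard_normal(1000)
x = np.linspace(-3.5, 3.5, 100)
y = epanechnikov_kde(d, x, h=0.4)   # then plot y versus x, as before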
From ilmarw at simula.no Mon Jan 28 11:31:49 2008 From: ilmarw at simula.no (Ilmar Wilbers) Date: Mon, 28 Jan 2008 17:31:49 +0100 Subject: Re: [SciPy-user] Error and fail on test In-Reply-To: <799406d60801271451k51bbccadja4c660e0f225f2c2@mail.gmail.com> References: <479CFD1D.8040104@simula.no> <799406d60801271451k51bbccadja4c660e0f225f2c2@mail.gmail.com> Message-ID: <479E0375.3050808@simula.no>

Hello, Thank you, that worked. I did see that fix, but I thought it was part of version 0.6.0. Now I have another problem; I get the following error:

======================================================================
FAIL: check_x_stride (scipy.linalg.tests.test_fblas.test_cgemv)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/linalg/tests/test_fblas.py", line 343, in check_x_stride
assert_array_almost_equal(desired_y,y)
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal
header='Arrays are not almost equal')
File "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare
assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
x: array([ 8.31392193 -8.31392193j, -14.72563744+16.72563744j, -13.49905777+17.49905777j], dtype=complex64)
y: array([ 8.31392193 -8.31392193j, -14.72563744+16.72563553j, -13.49905777+17.49905777j], dtype=complex64)

----------------------------------------------------------------------
Ran 1719 tests in 4.717s

FAILED (failures=1)
Out[2]:

The weird thing is, the two arrays (x and y) are the same. Any thoughts, anyone? ilmar

Adam Mercer wrote: > On Jan 27, 2008 4:52 PM, Ilmar Wilbers wrote: > >> ====================================================================== >> FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", >> line 76, in check_dot >> assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) >> File >> "/Users/ilmarw/ext/Darwin/lib/python2.5/site-packages/numpy/testing/utils.py", >> line 158, in assert_almost_equal >> assert round(abs(desired - actual),decimal) == 0, msg >> AssertionError: >> Items are not equal: >> ACTUAL: 7.3632427319432975e-38j >> DESIRED: (-9+2j) > > I ran into this problem, it has been fixed in SVN > > http://projects.scipy.org/scipy/scipy/changeset/3387 > > Cheers > > Adam

From robert.kern at gmail.com Mon Jan 28 12:13:33 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Jan 2008 11:13:33 -0600 Subject: Re: [SciPy-user] Error and fail on test In-Reply-To: <479E0375.3050808@simula.no> References: <479CFD1D.8040104@simula.no> <799406d60801271451k51bbccadja4c660e0f225f2c2@mail.gmail.com> <479E0375.3050808@simula.no> Message-ID: <479E0D3D.2040502@gmail.com>

Ilmar Wilbers wrote: > Hello, > > Thank you, that worked. I did see that fix, but I thought it was part > of version 0.6.0. > > Now I have another problem; I get the following error: > [the check_x_stride failure quoted above] > > The weird thing is, the two arrays (x and y) are the same. Any thoughts, > anyone?

They're not. The imaginary components of the second elements are slightly different. It's possible that the tolerance should be relaxed (not all accelerated BLASes use fully IEEE-754 compliant arithmetic), but this probably requires more investigation.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From peridot.faceted at gmail.com Mon Jan 28 12:58:02 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 28 Jan 2008 12:58:02 -0500 Subject: [SciPy-user] Draw a density line on an histogram In-Reply-To: <200801281007.38667.charles.vejnar@isb-sib.ch> References: <200801251952.17261.charles.vejnar@isb-sib.ch> <479A320F.3000803@gmail.com> <200801281007.38667.charles.vejnar@isb-sib.ch> Message-ID: On 28/01/2008, Charles Vejnar wrote: > Thank you. > > Ok, with gaussian_kde then linspace (it's a bit different than in R). > > Is it possible to have a different kernel than "gaussian" ? Just to be clear: the kernel is Gaussian, but the distribution that you are fitting need not be. Try, for example, squaring all the random values in Robert Kern's example. Taking a Gaussian kernel is in some maximum-entropy sense the most conservative possible choice, assuming as little as possible about your distribution. For periodic distributions I have implemented a von Mises kernel (the periodic analogue of a Gaussian, again in a maximum-entropy sense). Anne From f.braennstroem at gmx.de Mon Jan 28 16:11:03 2008 From: f.braennstroem at gmx.de (Fabian Braennstroem) Date: Mon, 28 Jan 2008 21:11:03 +0000 Subject: [SciPy-user] compare two csv files In-Reply-To: <47990EC3.1070304@astraw.com> References: <47990EC3.1070304@astraw.com> Message-ID: Hi to all, sorry for the bad question... actually I might be in the wrong group... At the beginning I thought, that I only have to compare two columns with numbers in it, but now the two columns could look like: 1st column: 1 2 3 2nd column: 0 5 1 So the result should be a list with entries, which exist in both lists like '1'. A little bit more difficult would be two lists with number and characters. E.g. the lists could look like: 1st column: 1.test 2.test 123.test 123.Test 2nd column: 0.test 123_test 5.test 123.Test The searching/comparing should produce two lists; one with the 'double' fuzzy entries like '123.test' and '123.Test'. Would be nice, if anyone can help!? Thanks! Fabian Andrew Straw schrieb am 01/24/2008 10:18 PM: > Hi Fabian, this is not a direct answer to your question, but you also > may be intrested in matplotlib's mlab.csv2rec() which automatically > creates a recordarray from a csv file. John Hunter and I, to much lesser > degree, have been hacking on this to work for us. Please feel free to > check its suitability for your purposes. > > Fabian Braennstroem wrote: >> Hi, >> I would like to compare two csv file; actually two columns >> from two csv files. >> I would use something like: >> def read_test(): >> start = time.clock() >> reader = csv.reader( file('data.txt') ) >> data = [ map(float, row) for row in reader ] >> data = array(data, dtype = float) >> >> To get my data into an array. >> >> Does anyone have an idea, how to compare the two columns? >> Would be nice! >> Fabian >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> From ysexternal at gmail.com Mon Jan 28 18:35:47 2008 From: ysexternal at gmail.com (Yasir Suhail) Date: Mon, 28 Jan 2008 18:35:47 -0500 Subject: [SciPy-user] thread failure in ATLAS while computing pseudoinverse Message-ID: <6af24c960801281535j631789a2n3f989ff3a3aae952@mail.gmail.com> I am trying to compute the pseudoinverse of a 5300 x 5300 symmetric matrix by scipy.linalg.pinv2 on an Intel x86 machine in a function. 
The program halts after 20-30 minutes with the following message: assertion !pthread_create( &(ROOT->pid), ATTR, ROOT->fun, ROOT ) failed, line 84 of file ~/software/lx-x86/atb/../ATLAS//src/pthreads/misc/ATL_thread_tree.c The value in /proc/sys/kernel/threads-max is 204800. Any ideas? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Jan 28 22:10:18 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 Jan 2008 12:10:18 +0900 Subject: [SciPy-user] optimization advice needed In-Reply-To: <479CFD27.6060908@astraw.com> References: <479CFD27.6060908@astraw.com> Message-ID: <479E991A.8090606@ar.media.kyoto-u.ac.jp> Andrew Straw wrote: > If f() is stationary and you are trying to estimate a and b, isn't this > exactly the case of a Kalman filter for linear f()? And if f() is > non-linear, there are extensions to the Kalman framework to handle this. > Even Kalman is overkill, I think, no (if f is linear) ? A simple wiener filter may be enough, then. Neal, I think the solution will depend on your background and how much time you want to spend on it (as well as the exact nature of the problem you are solving, obviously, such as can you first estimate the model on some data offline, and after estimate new data online, etc...): if you have only a couple of hours to spend, and you don't have background in bayesian statistics, I think it will be overkill. A good introduction in the spirit of what Gael suggested (as I understand it) is to read the first chapter and third chapter of the book "pattern recognition and machine learning" by C. Bishop. That's the best, almost self-contained introduction I can think of on the top of my head. cheers, David From david at ar.media.kyoto-u.ac.jp Mon Jan 28 22:11:21 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 Jan 2008 12:11:21 +0900 Subject: [SciPy-user] Install_SciPy In-Reply-To: <41bc705b0801271759u71fb3ae6s6e91db8bae442f62@mail.gmail.com> References: <41bc705b0801271759u71fb3ae6s6e91db8bae442f62@mail.gmail.com> Message-ID: <479E9959.4070309@ar.media.kyoto-u.ac.jp> Yennifer Santiago wrote: > > Hello... > > My last email was the follow: > > > >I want to install Scipy 0.6.0, I already install the packages python, > > >numpy, > > > atlas, lapack an SciPy, but when I make the test, after intall SciPy: > > > > > > carolina at carolinapc :~$ import scipy > > > carolina at carolinapc:~$ scipy.test(level=1) > > > bash: error de sintaxis cerca de token no esperado `level=1' > > > > > > What is the problem?? > > I already could execute the above commands in a python prompt, that > was the problem. But now that test produce some errors and I don't > understand what is the new problem: Those problems won't be a big problem unless you use the specific functions (the IO module is currently revamped, I think). cheers, David From david at ar.media.kyoto-u.ac.jp Mon Jan 28 22:13:39 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 Jan 2008 12:13:39 +0900 Subject: [SciPy-user] thread failure in ATLAS while computing pseudoinverse In-Reply-To: <6af24c960801281535j631789a2n3f989ff3a3aae952@mail.gmail.com> References: <6af24c960801281535j631789a2n3f989ff3a3aae952@mail.gmail.com> Message-ID: <479E99E3.6070302@ar.media.kyoto-u.ac.jp> Yasir Suhail wrote: > I am trying to compute the pseudoinverse of a 5300 x 5300 symmetric > matrix by scipy.linalg.pinv2 on an Intel x86 machine in a function. 
> The program halts after 20-30 minutes with the following message: > > assertion !pthread_create( &(ROOT->pid), ATTR, ROOT->fun, ROOT ) > failed, line 84 of file > ~/software/lx-x86/atb/../ATLAS//src/pthreads/misc/ATL_thread_tree.c > > The value in /proc/sys/kernel/threads-max is 204800. > > Any ideas? > You may want to try the Atlas mailing list instead. You will certainly have more chance to get someone who knows enough about the internal of ATLAS. cheers, David From david at ar.media.kyoto-u.ac.jp Mon Jan 28 22:19:49 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 Jan 2008 12:19:49 +0900 Subject: [SciPy-user] Bayes net question In-Reply-To: <479CFE08.3090505@astraw.com> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> Message-ID: <479E9B55.4010205@ar.media.kyoto-u.ac.jp> Andrew Straw wrote: > Any word yet? This would be a really excellent addition. Sorry, I thought I already posted the answer. According to K. Murphy, there is no problem to port it under the BSD, or any other code from him for that matter. Note however that there is a dependency on netlab: I don't know how much this is the case, and netlab is not developed by K. Murphy. cheers, David From cohen at slac.stanford.edu Mon Jan 28 22:33:44 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 28 Jan 2008 19:33:44 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <479E9B55.4010205@ar.media.kyoto-u.ac.jp> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> Message-ID: <479E9E98.6070800@slac.stanford.edu> Th netlab license in http://www.ncrg.aston.ac.uk/netlab/lib_lic.txt seems fairly permissive. I am not a license specialist though... J. David Cournapeau wrote: > Andrew Straw wrote: > >> Any word yet? This would be a really excellent addition. >> > Sorry, I thought I already posted the answer. According to K. Murphy, > there is no problem to port it under the BSD, or any other code from him > for that matter. Note however that there is a dependency on netlab: I > don't know how much this is the case, and netlab is not developed by K. > Murphy. > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Mon Jan 28 22:39:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Jan 2008 21:39:36 -0600 Subject: [SciPy-user] Bayes net question In-Reply-To: <479E9B55.4010205@ar.media.kyoto-u.ac.jp> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> Message-ID: <479E9FF8.5040809@gmail.com> David Cournapeau wrote: > Andrew Straw wrote: >> Any word yet? This would be a really excellent addition. > Sorry, I thought I already posted the answer. According to K. Murphy, > there is no problem to port it under the BSD, or any other code from him > for that matter. 
Note however that there is a dependency on netlab: I > don't know how much this is the case, and netlab is not developed by K. > Murphy. Fortunately, it too is available under a BSD license. http://www.ncrg.aston.ac.uk/netlab/lib_lic.txt -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon Jan 28 22:42:07 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 Jan 2008 12:42:07 +0900 Subject: [SciPy-user] Bayes net question In-Reply-To: <479E9FF8.5040809@gmail.com> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> Message-ID: <479EA08F.8040906@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: > >> Andrew Straw wrote: >> >>> Any word yet? This would be a really excellent addition. >>> >> Sorry, I thought I already posted the answer. According to K. Murphy, >> there is no problem to port it under the BSD, or any other code from him >> for that matter. Note however that there is a dependency on netlab: I >> don't know how much this is the case, and netlab is not developed by K. >> Murphy. >> > > Fortunately, it too is available under a BSD license. > > http://www.ncrg.aston.ac.uk/netlab/lib_lic.txt > > Stupid me, I did not even think about looking for the netlab license. Well, that's good. I started a proto-page on the scikits wiki. http://scipy.org/scipy/scikits/wiki/BayesNet People who have an interest in contributing could put their name, so that we can get an idea on how is willing to do somthing. cheers, David From strawman at astraw.com Mon Jan 28 22:58:07 2008 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 Jan 2008 19:58:07 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <479EA08F.8040906@ar.media.kyoto-u.ac.jp> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> Message-ID: <479EA44F.6010807@astraw.com> I can login, but I don't have an "Edit this page" capability. Maybe I must be added to the scikits group or something? My user name is "AndrewStraw" for the trac wiki. Anyhow, I am happy to help the porting as I can... David Cournapeau wrote: > Robert Kern wrote: >> David Cournapeau wrote: >> >>> Andrew Straw wrote: >>> >>>> Any word yet? This would be a really excellent addition. >>>> >>> Sorry, I thought I already posted the answer. According to K. Murphy, >>> there is no problem to port it under the BSD, or any other code from him >>> for that matter. Note however that there is a dependency on netlab: I >>> don't know how much this is the case, and netlab is not developed by K. >>> Murphy. >>> >> Fortunately, it too is available under a BSD license. >> >> http://www.ncrg.aston.ac.uk/netlab/lib_lic.txt >> >> > Stupid me, I did not even think about looking for the netlab license. > Well, that's good. I started a proto-page on the scikits wiki. 
> > http://scipy.org/scipy/scikits/wiki/BayesNet > > People who have an interest in contributing could put their name, so > that we can get an idea on how is willing to do somthing. > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From dwf at cs.toronto.edu Mon Jan 28 23:21:32 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 28 Jan 2008 23:21:32 -0500 Subject: [SciPy-user] Bayes net question In-Reply-To: <479EA44F.6010807@astraw.com> References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> <479EA44F.6010807@astraw.com> Message-ID: Likewise. I'd add my name if I could (login is dwf). David On 28-Jan-08, at 10:58 PM, Andrew Straw wrote: > I can login, but I don't have an "Edit this page" capability. Maybe I > must be added to the scikits group or something? My user name is > "AndrewStraw" for the trac wiki. Anyhow, I am happy to help the > porting > as I can... > > David Cournapeau wrote: >> Robert Kern wrote: >>> David Cournapeau wrote: >>> >>>> Andrew Straw wrote: >>>> >>>>> Any word yet? This would be a really excellent addition. >>>>> >>>> Sorry, I thought I already posted the answer. According to K. >>>> Murphy, >>>> there is no problem to port it under the BSD, or any other code >>>> from him >>>> for that matter. Note however that there is a dependency on >>>> netlab: I >>>> don't know how much this is the case, and netlab is not developed >>>> by K. >>>> Murphy. >>>> >>> Fortunately, it too is available under a BSD license. >>> >>> http://www.ncrg.aston.ac.uk/netlab/lib_lic.txt >>> >>> >> Stupid me, I did not even think about looking for the netlab license. >> Well, that's good. I started a proto-page on the scikits wiki. >> >> http://scipy.org/scipy/scikits/wiki/BayesNet >> >> People who have an interest in contributing could put their name, so >> that we can get an idea on how is willing to do somthing. >> >> cheers, >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From karl.young at ucsf.edu Mon Jan 28 23:35:43 2008 From: karl.young at ucsf.edu (Young, Karl) Date: Mon, 28 Jan 2008 20:35:43 -0800 Subject: [SciPy-user] Bayes net question References: <20080114162237.74c586f2@jakubik.ta3.sk> <478BE61D.9090309@ucsf.edu> <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> Message-ID: <9D202D4E86A4BF47BA6943ABDF21BE78039F0A3C@EXVS06.net.ucsf.edu> I'm willing to contribute (have done some Matlab -> Python porting but unfortunately for this project more IDL -> Python porting). I'm also pretty motivaterd re. 
seeing this happen as I've been reading some papers by Dimitris Margaritas that provide nice algorithms for feasibly testing independence for continuous variables, which would be really helpful for me re. construction of useful Bayes nets. And I'd prefer to work on adding versions of these to a SciPy package (or scikit) - if they're not already available in Kevin Murphy's toolbox, that is (I haven't looked at it in great detail yet). I'll figure out how to add my name to the wiki. Karl Young Center for Imaging of Neurodegenerative Disease, UCSF VA Medical Center, MRS Unit (114M) Phone: (415) 221-4810 x3114 FAX: (415) 668-2864 Email: karl young at ucsf edu -----Original Message----- From: scipy-user-bounces at scipy.org on behalf of David Cournapeau Sent: Mon 1/28/2008 7:42 PM To: SciPy Users List Subject: Re: [SciPy-user] Bayes net question Robert Kern wrote: > David Cournapeau wrote: > >> Andrew Straw wrote: >> >>> Any word yet? This would be a really excellent addition. >>> >> Sorry, I thought I already posted the answer. According to K. Murphy, >> there is no problem to port it under the BSD, or any other code from him >> for that matter. Note however that there is a dependency on netlab: I >> don't know how much this is the case, and netlab is not developed by K. >> Murphy. >> > > Fortunately, it too is available under a BSD license. > > http://www.ncrg.aston.ac.uk/netlab/lib_lic.txt > > Stupid me, I did not even think about looking for the netlab license. Well, that's good. I started a proto-page on the scikits wiki. http://scipy.org/scipy/scikits/wiki/BayesNet People who have an interest in contributing could put their name, so that we can get an idea on how is willing to do somthing. cheers, David _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From mcleane at math.ubc.ca Tue Jan 29 01:27:50 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Mon, 28 Jan 2008 22:27:50 -0800 (PST) Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' Message-ID: Dmitrey: There seems to be a problem with openopt/cvxopt working together. I am using double ('d') for my cvxopt.base.matrix matrices, and so line 27 in QP.py:

kwargs[fn] = asarray(kwargs[fn], float)

caused the program to crash since it tried to treat the double values as floating point. Removing the 'float' and recompiling led to an 'Array not contiguous' error. I admit I'm not too sure what this means exactly, and was only able to discover that arrays can be ordered in a contiguous fashion or in a FORTRAN fashion. Replacing the above line (27 in QP.py in Rev 166) with:

kwargs[fn] = asarray(kwargs[fn], order='C')

fixes this problem (by forcing the new array to be contiguous). However, errors eventually occur with the other values, so the initial line of the for loop (line 25) needed to be changed to:

for fn in ('H', 'f', 'A', 'b', 'Aeq', 'beq', 'lb', 'ub'):

Recompiling openopt with these changes results in no more errors in my code. I have not tested this for matrix(*, 'z') formats, but it seems to work for the standard floating point format. I do not understand the note:

#TODO: handle the case in runProbSolver()

but perhaps the original reason for casting to float has now disappeared with more recent changes to runProbSolver(). (I am running off of Subversion's most recent code base.) I am running cvxopt version 0.8.1 (there is a problem with the QP solver in the most recent version 0.9.2).
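(numpy also has a one-step helper for this kind of forcing; a small sketch, independent of the OpenOpt sources:)

import numpy
a = numpy.asfortranarray(numpy.ones((3, 3)))  # Fortran (column-major) order
print a.flags['C_CONTIGUOUS']                 # False
b = numpy.ascontiguousarray(a)                # copies only when it has to
print b.flags['C_CONTIGUOUS']                 # True

numpy.ascontiguousarray(x) should behave like asarray(x, order='C') but is a bit more explicit about the intent.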
Cheers, Mclean From mcleane at math.ubc.ca Tue Jan 29 02:56:16 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Mon, 28 Jan 2008 23:56:16 -0800 (PST) Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: References: Message-ID: An additional note or two: The problem with 'array not contiguous' happens because I call the cvxopt solver through openopt with a cvxopt matrix instead of an array. Casting the cvxopt.matrix as an array beforehand works with the original openopt code. Comparing the results between order='C' and cvxopt matrices and simply deleting the 'float' requirement, but using numpy arrays, the results appear to be identical. This would suggest that making the change is an improvement to the code. However, I am not getting identical results between calling cvxopt directly and calling cvxopt through openopt. I have not yet been able to track down the difference, although on inspection calling cvxopt through openopt is about 5x slower, and provides worse results. (I'm using QP as a subfunction to another algorithm and the appoximate answers using openopt appear to be worse than calling cvxopt directly.) The differences do not stem from using double. (I converted to float for cvxopt direct use, and there is no perceptual change in my graphs.) -Mclean From dahl.joachim at gmail.com Tue Jan 29 03:31:37 2008 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Tue, 29 Jan 2008 09:31:37 +0100 Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: References: Message-ID: <47347f490801290031m397bb788r8a9383060149c1e4@mail.gmail.com> There might be bugs in the Scipy Array Interface in CVXOPT. If you come across dubious behaviour and believe CVXOPT is to blame, we will appreciate feedback and small code snippets to reproduce the error. CVXOPT is never supposed to crash, so it sounds like you found a significant bug. Best regards joachim On Jan 29, 2008 8:56 AM, Mclean Edwards wrote: > An additional note or two: > > The problem with 'array not contiguous' happens because I call the cvxopt > solver through openopt with a cvxopt matrix instead of an array. > > Casting the cvxopt.matrix as an array beforehand works with the original > openopt code. > > Comparing the results between order='C' and cvxopt matrices > and simply deleting the 'float' requirement, but using numpy arrays, > the results appear to be identical. > > This would suggest that making the change is an improvement to the code. > > However, I am not getting identical results between calling cvxopt > directly and calling cvxopt through openopt. > > I have not yet been able to track down the difference, although on > inspection calling cvxopt through openopt is about 5x slower, and provides > worse results. > > (I'm using QP as a subfunction to another algorithm and > the appoximate answers using openopt appear to be worse than calling > cvxopt directly.) > > The differences do not stem from using double. > (I converted to float for cvxopt direct use, and there is no perceptual > change in my graphs.) > > -Mclean > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From millman at berkeley.edu Tue Jan 29 04:47:30 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 29 Jan 2008 01:47:30 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <479EA44F.6010807@astraw.com> References: <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> <479EA44F.6010807@astraw.com> Message-ID: On Jan 28, 2008 7:58 PM, Andrew Straw wrote: > I can login, but I don't have an "Edit this page" capability. Maybe I > must be added to the scikits group or something? My user name is > "AndrewStraw" for the trac wiki. Anyhow, I am happy to help the porting > as I can... Hey Andrew, Try now. Let me know if you are having any more difficulties. I am really excited about this project! Thanks to everyone who is working on this. Cheers, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Tue Jan 29 04:56:48 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 29 Jan 2008 01:56:48 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: References: <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> <479EA44F.6010807@astraw.com> Message-ID: On Jan 28, 2008 8:21 PM, David Warde-Farley wrote: > Likewise. I'd add my name if I could (login is dwf). Hey David, You should have the appropriate Trac permissions now. Thanks for working on this. Cheers, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ PS. Everyone can, of course, use whatever login they want. I would prefer for everyone to use login names with easily identifiable names (e.g., jarrod.millman). This makes it easier to keep track of who is who, which may make it easier to collaborate. From dmitrey.kroshko at scipy.org Tue Jan 29 09:12:41 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 29 Jan 2008 16:12:41 +0200 Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: References: Message-ID: <479F3459.4050406@scipy.org> I haven't understood all that you've written in your messages, however, I guess sending the code to me could be helpful. However, I use CVXOPT 0.9, and I haven't any problems with CVXOPT QP solver (despite I had never used CVXOPT 0.8.x). On the other hand, fixing the bug is more up to numpy-cvxopt binding, not OpenOpt-CVXOPT: doing only latter will not suppress the bug coplitely, sometimes others will encounter the one as well. So, I guess it's better either for you to control more thoroughly your fortran<->numpy<->CVXOPT matrix conversions, or for numpy developers to fix the one. Regards, D. Mclean Edwards wrote: > Dmitrey: > > There seems to be a problem with openopt/cvxopt working together. > > I am using double ('d') for my cvxopt.base.matrix matrices, and so the > line 27 in QP.py: > > kwargs[fn] = asarray(kwargs[fn], float) > > caused the program to crash since it tried to treat the double values as > floating point. > > Removing the 'float' and recompiling led to > 'Array not contiguous' error. 
> > I admit I'm not too such what this means exactly, and was only able to > discover that arrays can be ordered in a contiguous fashion or in a > FORTRAN fashion. > > Replacing the above line (27 in QP.py in Rev 166) with: > > kwargs[fn] = asarray(kwargs[fn], order='C') > > fixes this problem (by forcing the new array to be contiguous). > However errors eventually occur with the other > values, so the for loop initial line (25) needed to be changed to: > > for fn in ('H', 'f', 'A', 'b', 'Aeq', 'beq', 'lb', 'ub'): > > Recompiling openopt with these changes > results in no more errors in my code. > I have not tested this for matrix(*, 'z') formats, > but it seems to work for the standard floating point format. > > I do not understand the note: > #TODO: handle the case in runProbSolver() > but perhaps the original reason for casting to float has now disappeared > with more recent changes to runProbSolver(). > (I am running off of Subversion's most recent code base.) > > I am running cvxopt version 0.8.1 (there is a problem with the QP solver > in the most recent version 0.9.2). > > Cheers, > > Mclean > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From massimo.sandal at unibo.it Tue Jan 29 09:34:34 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Tue, 29 Jan 2008 15:34:34 +0100 Subject: [SciPy-user] Draw a density line on an histogram In-Reply-To: <200801281007.38667.charles.vejnar@isb-sib.ch> References: <200801251952.17261.charles.vejnar@isb-sib.ch> <479A320F.3000803@gmail.com> <200801281007.38667.charles.vejnar@isb-sib.ch> Message-ID: <479F397A.8080202@unibo.it> Charles Vejnar ha scritto: > Thank you. > > Ok, with gaussian_kde then linspace (it's a bit different than in R). > > Is it possible to have a different kernel than "gaussian" ? This (scipy-unrelated) package does it (and, in my opinion, is easier to use than the scipy one). http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/python/Statistics/manual/index.xhtml Any chance to merge it into Scipy? (I would like to help now, but I'm drowning in my ph.d. thesis...) m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From bsouthey at gmail.com Tue Jan 29 09:29:43 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 29 Jan 2008 08:29:43 -0600 Subject: [SciPy-user] optimization advice needed In-Reply-To: <479E991A.8090606@ar.media.kyoto-u.ac.jp> References: <479CFD27.6060908@astraw.com> <479E991A.8090606@ar.media.kyoto-u.ac.jp> Message-ID: Hi, If a and b 'are slowing changing', do these have a pattern, a sort of distribution or can you make some assumption about the changes? You really need to address this aspect because it will determine what you can and can not do. If you can model or approximate the changes in a and b then you can apply a vast range of methods. For example, if a is centered around some value and varies, you can model it as a expected value (fixed effect) and the associated variance (random effect) in a mixed effects model. This can be 'easily' extended to hierarchal, multilevel and nonlinear models where your f() becomes some known function. 
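(To make the "model the changes" route concrete: a toy sketch of recursive least squares with a forgetting factor, which tracks slowly drifting parameters a and b in a linear model y = a*u + b. The model form, the forgetting factor 0.98, and the data stream below are illustrative assumptions, not anything from this thread:)

import numpy

def rls_update(theta, P, u, y, lam=0.98):
    # One recursive least-squares step; lam < 1 slowly forgets old data.
    phi = numpy.array([u, 1.0])                  # regressors for y = a*u + b
    k = numpy.dot(P, phi) / (lam + numpy.dot(phi, numpy.dot(P, phi)))
    theta = theta + k * (y - numpy.dot(phi, theta))
    P = (P - numpy.outer(k, numpy.dot(phi, P))) / lam
    return theta, P

theta = numpy.zeros(2)      # initial guess for [a, b]
P = 100.0 * numpy.eye(2)    # large initial uncertainty
for u, y in [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9)]:   # made-up measurements
    theta, P = rls_update(theta, P, u, y)
print theta                 # estimate of the currently prevailing [a, b]

The forgetting factor plays the role of the "pattern in the changes": the closer lam is to 1, the more slowly the estimator assumes a and b drift.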
If not you are probably resigned to having to re-solve the problem by throwing away the old data because a and b have changed sufficiently. This appears what you are doing in your second point. This is inefficient because it wastes information and any patterns are ignored. Regards Bruce On Jan 28, 2008 9:10 PM, David Cournapeau wrote: > Andrew Straw wrote: > > If f() is stationary and you are trying to estimate a and b, isn't this > > exactly the case of a Kalman filter for linear f()? And if f() is > > non-linear, there are extensions to the Kalman framework to handle this. > > > Even Kalman is overkill, I think, no (if f is linear) ? A simple wiener > filter may be enough, then. > > Neal, I think the solution will depend on your background and how much > time you want to spend on it (as well as the exact nature of the problem > you are solving, obviously, such as can you first estimate the model on > some data offline, and after estimate new data online, etc...): if you > have only a couple of hours to spend, and you don't have background in > bayesian statistics, I think it will be overkill. > > A good introduction in the spirit of what Gael suggested (as I > understand it) is to read the first chapter and third chapter of the > book "pattern recognition and machine learning" by C. Bishop. That's the > best, almost self-contained introduction I can think of on the top of my > head. > > cheers, > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From zachary.pincus at yale.edu Tue Jan 29 11:10:35 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 29 Jan 2008 11:10:35 -0500 Subject: [SciPy-user] compare two csv files In-Reply-To: References: <47990EC3.1070304@astraw.com> Message-ID: <09B4C5D9-E1F0-42F7-9CBB-6831F1772ED4@yale.edu> Hi Fabian, Perhaps you could specify your problem more clearly. Basically, you want to write a python function that takes two values and calls them "equal" or not (in a fuzzy manner), and then you want to apply that function along a column of data? This is probably best handled in pure python, until you get a little more comfortable with the basic language and want to learn numpy/ scipy. But first things first. So -- you need to specify *exactly* what sort of "fuzzy" matches are acceptable. Then you need to transform this specification into a python function. Given this, it's easy to compare two lists: list1 = [...whatever...] list2 = [...whatever...] def are_fuzzy_equal(element1, element2): ...whatever... list3 = [] for element1, element2 in zip(list1, list2): if are_fuzzy_equal(element1, element2): list3.append(element1) If your question is about how to implement are_fuzzy_equal, you'll need to (a) specify that clearly, and (b) probably want to ask on a basic python-language list. Or I'm sure some folks here would help in a pinch. Zach On Jan 28, 2008, at 4:11 PM, Fabian Braennstroem wrote: > Hi to all, > > sorry for the bad question... actually I might be in the > wrong group... > > > At the beginning I thought, that I only have to compare two > columns with numbers in it, but now the two columns could > look like: > > 1st column: > 1 > 2 > 3 > > > 2nd column: > 0 > 5 > 1 > > So the result should be a list with entries, which exist in > both lists like '1'. > A little bit more difficult would be two lists with number > and characters. > E.g. 
the lists could look like: > > 1st column: > 1.test > 2.test > 123.test > 123.Test > > 2nd column: > 0.test > 123_test > 5.test > 123.Test > > The searching/comparing should produce two lists; one with > the 'double' fuzzy entries like '123.test' and '123.Test'. > > Would be nice, if anyone can help!? > Thanks! > Fabian > > Andrew Straw schrieb am 01/24/2008 10:18 PM: >> Hi Fabian, this is not a direct answer to your question, but you also >> may be intrested in matplotlib's mlab.csv2rec() which automatically >> creates a recordarray from a csv file. John Hunter and I, to much >> lesser >> degree, have been hacking on this to work for us. Please feel free to >> check its suitability for your purposes. >> >> Fabian Braennstroem wrote: >>> Hi, >>> I would like to compare two csv file; actually two columns >>> from two csv files. >>> I would use something like: >>> def read_test(): >>> start = time.clock() >>> reader = csv.reader( file('data.txt') ) >>> data = [ map(float, row) for row in reader ] >>> data = array(data, dtype = float) >>> >>> To get my data into an array. >>> >>> Does anyone have an idea, how to compare the two columns? >>> Would be nice! >>> Fabian >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From cohen at slac.stanford.edu Tue Jan 29 13:57:55 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 29 Jan 2008 10:57:55 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: References: <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> <479EA44F.6010807@astraw.com> Message-ID: <479F7733.9010105@slac.stanford.edu> hello, I will be happy to try to contribute, though I will need some time to get in, as I am currently relocating to Europe. I have been using matlab/octave and scipy a lot recently. best, Johann Jarrod Millman wrote: > On Jan 28, 2008 7:58 PM, Andrew Straw wrote: > >> I can login, but I don't have an "Edit this page" capability. Maybe I >> must be added to the scikits group or something? My user name is >> "AndrewStraw" for the trac wiki. Anyhow, I am happy to help the porting >> as I can... >> > > Hey Andrew, > > Try now. Let me know if you are having any more difficulties. > > I am really excited about this project! Thanks to everyone who is > working on this. > > Cheers, > > From mcleane at math.ubc.ca Tue Jan 29 17:14:54 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Tue, 29 Jan 2008 14:14:54 -0800 (PST) Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: <47347f490801290031m397bb788r8a9383060149c1e4@mail.gmail.com> References: <47347f490801290031m397bb788r8a9383060149c1e4@mail.gmail.com> Message-ID: Joachim: There is indeed a problem with cvxopt matrices interfacing with numpy in general. Here is example code: from cvxopt.base import matrix import numpy a = matrix([2,3,4]) b = numpy.asarray(a) c = matrix(b) The last statement (c) returns an 'array not contiguous' error. 
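(A sketch restating the reproduction with the contiguity flags printed; the ascontiguousarray call is an assumption modelled on the workarounds described next, not a line from the thread:)

from cvxopt.base import matrix
import numpy

a = matrix([2, 3, 4])                    # CVXOPT matrix
b = numpy.asarray(a)                     # converts via the array interface
print b.flags                            # is C_CONTIGUOUS False here?
c = matrix(numpy.ascontiguousarray(b))   # a C-contiguous copy sidesteps the error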
Changing line (b) to b = numpy.asarray(a, order='C') #(specifies order as contiguous) or b = numpy.array(a) #(copies the array instead of leaving it be) results in no such error. -Mclean On Tue, 29 Jan 2008, Joachim Dahl wrote: > There might be bugs in the Scipy Array Interface in CVXOPT. If you come > across dubious behaviour and believe CVXOPT is to blame, we will > appreciate feedback and small code snippets to reproduce the error. > CVXOPT is never supposed to crash, so it sounds like you found a > significant > bug. > > Best regards > joachim > > On Jan 29, 2008 8:56 AM, Mclean Edwards wrote: > >> An additional note or two: >> >> The problem with 'array not contiguous' happens because I call the cvxopt >> solver through openopt with a cvxopt matrix instead of an array. >> >> Casting the cvxopt.matrix as an array beforehand works with the original >> openopt code. >> >> Comparing the results between order='C' and cvxopt matrices >> and simply deleting the 'float' requirement, but using numpy arrays, >> the results appear to be identical. >> >> This would suggest that making the change is an improvement to the code. >> >> However, I am not getting identical results between calling cvxopt >> directly and calling cvxopt through openopt. >> >> I have not yet been able to track down the difference, although on >> inspection calling cvxopt through openopt is about 5x slower, and provides >> worse results. >> >> (I'm using QP as a subfunction to another algorithm and >> the appoximate answers using openopt appear to be worse than calling >> cvxopt directly.) >> >> The differences do not stem from using double. >> (I converted to float for cvxopt direct use, and there is no perceptual >> change in my graphs.) >> >> -Mclean >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > From mcleane at math.ubc.ca Tue Jan 29 17:25:05 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Tue, 29 Jan 2008 14:25:05 -0800 (PST) Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: <479F3459.4050406@scipy.org> References: <479F3459.4050406@scipy.org> Message-ID: The cvxopt/numpy problem is detailed more precisely in my response to Joachim. (also on SciPy-user) My remaining complaint with openopt-cvxopt is that arrays (or matrices) are required to be floating point (float) in QP.py. Why is this the case, especially since cvxopt itself handles double, complex, and integer matrices? Also, I'm getting slightly different numbers when using openopt. I'm in the process of tracking this problem down. I'll send the problem code when I isolate it. Cheers, Mclean On Tue, 29 Jan 2008, dmitrey wrote: > I haven't understood all that you've written in your messages, however, > I guess sending the code to me could be helpful. However, I use CVXOPT > 0.9, and I haven't any problems with CVXOPT QP solver (despite I had > never used CVXOPT 0.8.x). > > On the other hand, fixing the bug is more up to numpy-cvxopt binding, > not OpenOpt-CVXOPT: doing only latter will not suppress the bug > coplitely, sometimes others will encounter the one as well. > > So, I guess it's better either for you to control more thoroughly your > fortran<->numpy<->CVXOPT matrix conversions, or for numpy developers to > fix the one. > > Regards, D. > > Mclean Edwards wrote: >> Dmitrey: >> >> There seems to be a problem with openopt/cvxopt working together. 
>> >> I am using double ('d') for my cvxopt.base.matrix matrices, and so the >> line 27 in QP.py: >> >> kwargs[fn] = asarray(kwargs[fn], float) >> >> caused the program to crash since it tried to treat the double values as >> floating point. >> >> Removing the 'float' and recompiling led to >> 'Array not contiguous' error. >> >> I admit I'm not too such what this means exactly, and was only able to >> discover that arrays can be ordered in a contiguous fashion or in a >> FORTRAN fashion. >> >> Replacing the above line (27 in QP.py in Rev 166) with: >> >> kwargs[fn] = asarray(kwargs[fn], order='C') >> >> fixes this problem (by forcing the new array to be contiguous). >> However errors eventually occur with the other >> values, so the for loop initial line (25) needed to be changed to: >> >> for fn in ('H', 'f', 'A', 'b', 'Aeq', 'beq', 'lb', 'ub'): >> >> Recompiling openopt with these changes >> results in no more errors in my code. >> I have not tested this for matrix(*, 'z') formats, >> but it seems to work for the standard floating point format. >> >> I do not understand the note: >> #TODO: handle the case in runProbSolver() >> but perhaps the original reason for casting to float has now disappeared >> with more recent changes to runProbSolver(). >> (I am running off of Subversion's most recent code base.) >> >> I am running cvxopt version 0.8.1 (there is a problem with the QP solver >> in the most recent version 0.9.2). >> >> Cheers, >> >> Mclean >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From roger.herikstad at gmail.com Tue Jan 29 19:09:58 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 30 Jan 2008 08:09:58 +0800 Subject: [SciPy-user] pairwise difference between large sets of arrays Message-ID: Hi all, I am trying to cluster sets of arrays, typically consisting of ~ 10,000 arrays, each about 1600 points long, using the Pycluster package ( http://bonsai.ims.u-tokyo.ac.jp/%7Emdehoon/software/cluster/software.htm#pycluster), but my problem is that I can't seem to create the diffence matrix needed. Using the zeros functions to preallocate the space, it raises a ValueError saying dimensions too large. Now, I realise this might not be strictly relevant to this list, but I was wondering if anyone knew what the limits are for creating arrays like this? Is it an allocation error, or some other restriction in the numpy package? Does anyone know of alternative ways of achieving this clustering that does not require preallocation of such a large matrix in python? ~ Thanks ~ Roger -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Karl.Young at ucsf.edu Tue Jan 29 19:19:32 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 29 Jan 2008 16:19:32 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: References: <478BF5E8.9060405@ucsf.edu> <4410BB1B-E00B-4C0D-A4D5-015576A3D6A7@cs.toronto.edu> <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> <479EA44F.6010807@astraw.com> Message-ID: <479FC294.3060800@ucsf.edu> Ditto (can't edit pages - login is kyoung) >On Jan 28, 2008 8:21 PM, David Warde-Farley wrote: > > >>Likewise. I'd add my name if I could (login is dwf). >> >> > >Hey David, > >You should have the appropriate Trac permissions now. > >Thanks for working on this. > >Cheers, > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From drudd at drudd.com Tue Jan 29 20:18:34 2008 From: drudd at drudd.com (Douglas Rudd) Date: Tue, 29 Jan 2008 20:18:34 -0500 Subject: [SciPy-user] fftpack error with large arrays Message-ID: <479FD06A.9020508@drudd.com> Hi, I'm trying to do a 512^3 float32 fft but I'm getting a strange error (output from ipython included below). The code works up to 384^3, but not up to 512^3. I've looked at the source to scipy and can't find where this error is being generated. Any ideas? Doug In [1]: from scipy import zeros In [2]: from scipy.fftpack import fftn In [3]: g = zeros( (512,512,512) ) In [4]: fftn(g) --------------------------------------------------------------------------- Traceback (most recent call last) /Users/drudd/ in () /sw/lib/python2.5/site-packages/scipy/fftpack/basic.py in fftn(x, shape, axes, overwrite_x) 298 overwrite_x = 1 299 work_function = fftpack.zfftnd --> 300 return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) 301 302 /sw/lib/python2.5/site-packages/scipy/fftpack/basic.py in _raw_fftnd(x, s, axes, direction, overwrite_x, work_function) 235 x = _fix_shape(x,s[i],i) 236 if axes is None: --> 237 return work_function(x,s,direction,overwrite_x=overwrite_x) 238 239 #XXX: should we allow/check for repeated indices in axes? : dimensions too large. From dahl.joachim at gmail.com Wed Jan 30 02:31:29 2008 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Wed, 30 Jan 2008 08:31:29 +0100 Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: References: <47347f490801290031m397bb788r8a9383060149c1e4@mail.gmail.com> Message-ID: <47347f490801292331o4f8b5d59l7daadb98c75e3528@mail.gmail.com> Dear Mclean, I think the bug has been fixed in newer versions of CVXOPT. A new version will be released soon, which will hopefully solve some of the problems you experienced with the QP solver. best regards joachim On Jan 29, 2008 11:14 PM, Mclean Edwards wrote: > Joachim: > > There is indeed a problem with cvxopt matrices interfacing with numpy in > general. > > Here is example code: > > from cvxopt.base import matrix > import numpy > > a = matrix([2,3,4]) > b = numpy.asarray(a) > c = matrix(b) > > > The last statement (c) returns an 'array not contiguous' error. > Changing line (b) to > > b = numpy.asarray(a, order='C') > #(specifies order as contiguous) > > or > > b = numpy.array(a) > #(copies the array instead of leaving it be) > > results in no such error. 
> > > -Mclean > > On Tue, 29 Jan 2008, Joachim Dahl wrote: > > > There might be bugs in the Scipy Array Interface in CVXOPT. If you > come > > across dubious behaviour and believe CVXOPT is to blame, we will > > appreciate feedback and small code snippets to reproduce the error. > > CVXOPT is never supposed to crash, so it sounds like you found a > > significant > > bug. > > > > Best regards > > joachim > > > > On Jan 29, 2008 8:56 AM, Mclean Edwards wrote: > > > >> An additional note or two: > >> > >> The problem with 'array not contiguous' happens because I call the > cvxopt > >> solver through openopt with a cvxopt matrix instead of an array. > >> > >> Casting the cvxopt.matrix as an array beforehand works with the > original > >> openopt code. > >> > >> Comparing the results between order='C' and cvxopt matrices > >> and simply deleting the 'float' requirement, but using numpy arrays, > >> the results appear to be identical. > >> > >> This would suggest that making the change is an improvement to the > code. > >> > >> However, I am not getting identical results between calling cvxopt > >> directly and calling cvxopt through openopt. > >> > >> I have not yet been able to track down the difference, although on > >> inspection calling cvxopt through openopt is about 5x slower, and > provides > >> worse results. > >> > >> (I'm using QP as a subfunction to another algorithm and > >> the appoximate answers using openopt appear to be worse than calling > >> cvxopt directly.) > >> > >> The differences do not stem from using double. > >> (I converted to float for cvxopt direct use, and there is no perceptual > >> change in my graphs.) > >> > >> -Mclean > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Wed Jan 30 04:40:45 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 30 Jan 2008 01:40:45 -0800 Subject: [SciPy-user] Bayes net question In-Reply-To: <479FC294.3060800@ucsf.edu> References: <478DB3A5.2090909@ar.media.kyoto-u.ac.jp> <479CFE08.3090505@astraw.com> <479E9B55.4010205@ar.media.kyoto-u.ac.jp> <479E9FF8.5040809@gmail.com> <479EA08F.8040906@ar.media.kyoto-u.ac.jp> <479EA44F.6010807@astraw.com> <479FC294.3060800@ucsf.edu> Message-ID: On Jan 29, 2008 4:19 PM, Karl Young wrote: > Ditto (can't edit pages - login is kyoung) Try it now and let me know if you have any problems. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From mcleane at math.ubc.ca Wed Jan 30 14:13:40 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Wed, 30 Jan 2008 11:13:40 -0800 (PST) Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: <47347f490801292331o4f8b5d59l7daadb98c75e3528@mail.gmail.com> References: <47347f490801290031m397bb788r8a9383060149c1e4@mail.gmail.com> <47347f490801292331o4f8b5d59l7daadb98c75e3528@mail.gmail.com> Message-ID: Dear Joachim, Thank you for the update. I did try this agains the 0.9.2 version of cvxopt, and received the same error. This error has been reproduced on cvxopt-0.8.1 on linux32 and linux64 machines and cvxopt-0.9.2 on my linux32 machine. 
I have not been able to compile cvxopt on the solaris server, as it complains about _Imaginary_I not being declared when compiling base.c. I've been told this might be due to a different version of gcc on that machine. Awaiting the new version, -Mclean On Wed, 30 Jan 2008, Joachim Dahl wrote: > Dear Mclean, > > I think the bug has been fixed in newer versions of CVXOPT. > A new version will be released soon, which will hopefully solve > some of the problems you experienced with the QP solver. > > best regards > joachim > > On Jan 29, 2008 11:14 PM, Mclean Edwards wrote: > >> Joachim: >> >> There is indeed a problem with cvxopt matrices interfacing with numpy in >> general. >> >> Here is example code: >> >> from cvxopt.base import matrix >> import numpy >> >> a = matrix([2,3,4]) >> b = numpy.asarray(a) >> c = matrix(b) >> >> >> The last statement (c) returns an 'array not contiguous' error. >> Changing line (b) to >> >> b = numpy.asarray(a, order='C') >> #(specifies order as contiguous) >> >> or >> >> b = numpy.array(a) >> #(copies the array instead of leaving it be) >> >> results in no such error. >> >> >> -Mclean >> >> On Tue, 29 Jan 2008, Joachim Dahl wrote: >> >>> There might be bugs in the Scipy Array Interface in CVXOPT. If you >> come >>> across dubious behaviour and believe CVXOPT is to blame, we will >>> appreciate feedback and small code snippets to reproduce the error. >>> CVXOPT is never supposed to crash, so it sounds like you found a >>> significant >>> bug. >>> >>> Best regards >>> joachim >>> >>> On Jan 29, 2008 8:56 AM, Mclean Edwards wrote: >>> >>>> An additional note or two: >>>> >>>> The problem with 'array not contiguous' happens because I call the >> cvxopt >>>> solver through openopt with a cvxopt matrix instead of an array. >>>> >>>> Casting the cvxopt.matrix as an array beforehand works with the >> original >>>> openopt code. >>>> >>>> Comparing the results between order='C' and cvxopt matrices >>>> and simply deleting the 'float' requirement, but using numpy arrays, >>>> the results appear to be identical. >>>> >>>> This would suggest that making the change is an improvement to the >> code. >>>> >>>> However, I am not getting identical results between calling cvxopt >>>> directly and calling cvxopt through openopt. >>>> >>>> I have not yet been able to track down the difference, although on >>>> inspection calling cvxopt through openopt is about 5x slower, and >> provides >>>> worse results. >>>> >>>> (I'm using QP as a subfunction to another algorithm and >>>> the appoximate answers using openopt appear to be worse than calling >>>> cvxopt directly.) >>>> >>>> The differences do not stem from using double. >>>> (I converted to float for cvxopt direct use, and there is no perceptual >>>> change in my graphs.) >>>> >>>> -Mclean >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-user >>>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > From mcleane at math.ubc.ca Wed Jan 30 14:21:42 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Wed, 30 Jan 2008 11:21:42 -0800 (PST) Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: References: <479F3459.4050406@scipy.org> Message-ID: The numbers turn out to be the same, as I had previously made an error on my end. 
my end.
(I needed r.xf[-1] for my optimal value instead of r.ff, due to a problem transformation.) The speed for openopt-cvxopt and cvxopt are also comparable (~15% overhead for a couple of runs on small problems). Dmitrey, you deserve some praise. Openopt is a good package, and I am very glad you are developing it. When I'm finally able to get some preliminary code done myself, I would be glad to contribute. I'm off to play around some more with the NLP solvers. Cheers, Mclean From dmitrey.kroshko at scipy.org Wed Jan 30 14:40:06 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 30 Jan 2008 21:40:06 +0200 Subject: [SciPy-user] openopt vs. cvxopt, 'f' vs. 'd','z' In-Reply-To: References: <479F3459.4050406@scipy.org> Message-ID: <47A0D296.70300@scipy.org> Mclean Edwards wrote: > The numbers turn out to be the same, as I had previously made an error on > my end. (I needed r.xf[-1] for my optimal value instead of r.ff, due > to a problem transformation.) > The speed for openopt-cvxopt and cvxopt are > also comparable (~15% overhead for a couple of runs on small problems). > As for speed, there is the following issue: cvxopt LP & QP solvers require a parameter True/False to treat problem sparse or dense. I decided not to overwhelm OO users additional parameters, moreover, lpSolve and glpk has no the one, they determine it by themselves, according to sparsity of matrices. So I decided to call "sparse" CVXOPT solvers if numberNonZeros/FullSize<0.3, and "dense" otherwise. So the time elapsed and results can be a little bit different (because other algs were used). > Dmitrey, you deserve some praise. Openopt is a good package, and I am > very glad you are developing it. Thank you, however, it would be much more better, would anyone mention something like that in my guestbook - it could help me to achieve a finance support via grant. Regards, D. > When I'm finally able to get some > preliminary code done myself, I would be glad to contribute. > > I'm off to play around some more with the NLP solvers. > > Cheers, > > Mclean > From cscheid at sci.utah.edu Wed Jan 30 20:47:17 2008 From: cscheid at sci.utah.edu (Carlos Scheidegger) Date: Wed, 30 Jan 2008 18:47:17 -0700 Subject: [SciPy-user] assign to diagonal values? Message-ID: <47A128A5.7010406@sci.utah.edu> Say I have a matrix m whose diagonal values I want to replace with the contents of vector new_diagonal. Right now, I'm using m -= diag(m) m += diag(new_diagonal) but that seems pretty wasteful (especially if diag() creates new arrays, which I suspect it does). Is there a standard (or any efficient) way to assign the diagonal values directly without an explicit for loop? Googling for "site:www.scipy.org assign diagonal" only returned results for sparse matrices, which is not the case here - these are plain numpy arrays. Thanks in advance, -carlos From mcleane at math.ubc.ca Wed Jan 30 22:02:46 2008 From: mcleane at math.ubc.ca (Mclean Edwards) Date: Wed, 30 Jan 2008 19:02:46 -0800 (PST) Subject: [SciPy-user] assign to diagonal values? In-Reply-To: <47A128A5.7010406@sci.utah.edu> References: <47A128A5.7010406@sci.utah.edu> Message-ID: Hello, m -= diag(m) subtracts diag(m) from each row of m for me, if m is a numpy array, so your method could be far worse than wasteful. There may be an easier way to implement, but at its root it will be similar to the simple for loop: for i in range(len(new_diagonal)): m[i][i] = new_diagonal[i] assuming that new_diagonal is already a 1D array. If not, try: new_diagonal = diag(new_diagonal) first then run. 
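(A vectorized, in-place alternative that avoids the Python-level loop altogether; a sketch with made-up values:)

import numpy
m = numpy.zeros((4, 4))
new_diagonal = numpy.array([1.0, 2.0, 3.0, 4.0])
idx = numpy.arange(min(m.shape))
m[idx, idx] = new_diagonal   # fancy indexing writes only the diagonal, in place
print m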
From cscheid at sci.utah.edu  Wed Jan 30 20:47:17 2008
From: cscheid at sci.utah.edu (Carlos Scheidegger)
Date: Wed, 30 Jan 2008 18:47:17 -0700
Subject: [SciPy-user] assign to diagonal values?
Message-ID: <47A128A5.7010406@sci.utah.edu>

Say I have a matrix m whose diagonal values I want to replace with the
contents of vector new_diagonal. Right now, I'm using

m -= diag(m)
m += diag(new_diagonal)

but that seems pretty wasteful (especially if diag() creates new
arrays, which I suspect it does). Is there a standard (or any
efficient) way to assign the diagonal values directly, without an
explicit for loop? Googling for "site:www.scipy.org assign diagonal"
only returned results for sparse matrices, which is not the case here;
these are plain numpy arrays.

Thanks in advance,
-carlos

From mcleane at math.ubc.ca  Wed Jan 30 22:02:46 2008
From: mcleane at math.ubc.ca (Mclean Edwards)
Date: Wed, 30 Jan 2008 19:02:46 -0800 (PST)
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: <47A128A5.7010406@sci.utah.edu>
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: 

Hello,

m -= diag(m)

subtracts diag(m) from each row of m for me, if m is a numpy array
(diag() of a 2-D array extracts the 1-D diagonal, which then broadcasts
across the rows), so your method could be far worse than wasteful.

There may be an easier way to implement it, but at its root it will be
similar to the simple for loop:

for i in range(len(new_diagonal)):
    m[i][i] = new_diagonal[i]

assuming that new_diagonal is already a 1-D array. If it is not, try

new_diagonal = diag(new_diagonal)

first, then run it. If your dimensions are (very) large, you may get
better performance with

for i in xrange(len(new_diagonal)):

Hope this is helpful,

Mclean

On Wed, 30 Jan 2008, Carlos Scheidegger wrote:

> Say I have a matrix m whose diagonal values I want to replace with
> the contents of vector new_diagonal. [...]
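[A runnable editorial illustration of the broadcasting pitfall Mclean
points out, using plain numpy.]

    import numpy as np

    m = np.arange(9.0).reshape(3, 3)
    d = np.diag(m)     # extraction: the 1-D diagonal [0., 4., 8.]
    print(m - d)       # d broadcasts, so it is subtracted from every row

    # diag() of a 1-D argument builds a 2-D diagonal matrix, so zeroing
    # only the diagonal entries requires applying diag twice:
    print(m - np.diag(np.diag(m)))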
From aisaac at american.edu  Wed Jan 30 22:08:01 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 30 Jan 2008 22:08:01 -0500
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: <47A128A5.7010406@sci.utah.edu>
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: 

Try the function below (from pyGAUSS).

hth,
Alan Isaac

import numpy

# diagrv: insert v as the diagonal of matrix x (2-D only!)
def diagrv(x, v, copy=True):
    assert len(x.shape) == 2, "For 2-d arrays only."
    x = numpy.matrix(x, copy=copy)   # copy=False writes through to x
    stride = 1 + x.shape[1]          # step between diagonal entries in flat order
    x.flat[slice(0, None, stride)] = v
    return x

From tjhnson at gmail.com  Wed Jan 30 22:08:02 2008
From: tjhnson at gmail.com (Tom Johnson)
Date: Wed, 30 Jan 2008 19:08:02 -0800
Subject: [SciPy-user] exp2 or 2**
Message-ID: 

Are there reasons to use scipy.special.exp2 over 2** when operating on
arrays? If so, what are they? Using 2** seems to be faster...
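[One way to time the two forms on one's own machine; an editorial
sketch, and the absolute numbers will vary with hardware and array
size.]

    import timeit

    setup = ("import numpy, scipy.special; "
             "x = numpy.linspace(0.0, 10.0, 100000)")
    for stmt in ("2.0 ** x", "scipy.special.exp2(x)"):
        best = min(timeit.Timer(stmt, setup).repeat(3, 100))
        print("%s: %.4f s per 100 calls" % (stmt, best))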
From barrywark at gmail.com  Wed Jan 30 22:13:01 2008
From: barrywark at gmail.com (Barry Wark)
Date: Wed, 30 Jan 2008 19:13:01 -0800
Subject: [SciPy-user] unable to edit scikits Trac
Message-ID: 

I've been unable to edit pages on the scikits Trac wiki
(http://scipy.org/scipy/scikits) for several days now. Even after
logging in, the edit page link is missing. I'm experiencing the same
problem in Safari 3 and Camino (OS X 10.5). Has anyone else come
across this issue (and maybe a solution)?

Thanks,
Barry

From millman at berkeley.edu  Wed Jan 30 22:19:33 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 30 Jan 2008 19:19:33 -0800
Subject: [SciPy-user] unable to edit scikits Trac
In-Reply-To: 
References: 
Message-ID: 

I don't see your name in the list of users who can edit the wiki.
Tell me your username and I will give you permission.

On Jan 30, 2008 7:13 PM, Barry Wark wrote:
> I've been unable to edit pages on the scikits Trac wiki
> (http://scipy.org/scipy/scikits) for several days now. [...]

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From dwf at cs.toronto.edu  Wed Jan 30 22:44:42 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 30 Jan 2008 22:44:42 -0500
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: 
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: 

On 30-Jan-08, at 10:08 PM, Alan G Isaac wrote:

> # diagrv: insert v as the diagonal of matrix x (2-D only!)
> def diagrv(x, v, copy=True):
>     assert len(x.shape) == 2, "For 2-d arrays only."
>     x = numpy.matrix(x, copy=copy)
>     stride = 1 + x.shape[1]
>     x.flat[slice(0, None, stride)] = v
>     return x

Ooh, that is clever. This should really go on the Cookbook page.

Perhaps more generally, for in-place modification:

def setdiag(m, d):
    assert len(m.shape) == 2
    stride = 1 + m.shape[1]
    m.flat[slice(0, None, stride)] = d

Regards,
David

From barrywark at gmail.com  Wed Jan 30 23:22:03 2008
From: barrywark at gmail.com (Barry Wark)
Date: Wed, 30 Jan 2008 20:22:03 -0800
Subject: [SciPy-user] unable to edit scikits Trac
In-Reply-To: 
References: 
Message-ID: 

Jarrod,

My username is barrywark. I used to be able to edit (I'm the author of
the scikits.ann ANN wrapper), but edit permission somehow disappeared.
Thanks for helping me out.

Barry

On Jan 30, 2008, at 7:19 PM, "Jarrod Millman" wrote:

> I don't see your name in the list of users who can edit the wiki.
> Tell me your username and I will give you permission. [...]

From millman at berkeley.edu  Wed Jan 30 23:29:34 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 30 Jan 2008 20:29:34 -0800
Subject: [SciPy-user] unable to edit scikits Trac
In-Reply-To: 
References: 
Message-ID: 

On Jan 30, 2008 8:22 PM, Barry Wark wrote:
> My username is barrywark. I used to be able to edit (I'm the author
> of the scikits.ann ANN wrapper), but edit permission somehow
> disappeared. Thanks for helping me out.

No problem. You should have access now.

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From anand.prabhakar.patil at gmail.com  Thu Jan 31 00:28:22 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Wed, 30 Jan 2008 21:28:22 -0800
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: 
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: <2bc7a5a50801302128g79da7ea6jc95ef9abb00b3785@mail.gmail.com>

On Jan 30, 2008 7:44 PM, David Warde-Farley wrote:
> Ooh, that is clever. This should really go on the Cookbook page.
>
> Perhaps more generally, for in-place modification:
>
> def setdiag(m, d):
>     assert len(m.shape) == 2
>     stride = 1 + m.shape[1]
>     m.flat[slice(0, None, stride)] = d

Yeah, that's awesome. I have so many for-loops littering my code for
setting diagonals. Here's an nd-version:

from numpy import cumprod

def setdiag(a, d):
    assert all([s == len(d) for s in a.shape])
    stride = 1 + sum(cumprod(a.shape[:-1]))
    a.flat[::stride] = d

Cheers,
Anand
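[A quick editorial sanity check of the nd-version; it assumes the
setdiag just defined above is in scope, including its numpy import.]

    import numpy as np

    a = np.zeros((4, 4, 4))
    setdiag(a, np.array([1.0, 2.0, 3.0, 4.0]))
    for i in range(4):
        assert a[i, i, i] == i + 1.0   # the hyper-diagonal was written
    assert a.sum() == 10.0             # ...and nothing else was touched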
From mcleane at math.ubc.ca  Thu Jan 31 00:41:34 2008
From: mcleane at math.ubc.ca (Mclean Edwards)
Date: Wed, 30 Jan 2008 21:41:34 -0800 (PST)
Subject: [SciPy-user] Bug in openopt: objFunRelated
Message-ID: 

For a user-supplied gradient that is not iterable, user_df in
objFunRelated.py raises an UnboundLocalError, since user_df_func is
referenced before it is assigned. Looking at the code, this is
obvious.

I suggest adding a line after line 116, as an else to the if on line
112:

else: user_df_func = p.user.df

With this change in place, the solver runs without crashing, and my
NLP runs to an approximate solution on all the major solvers. I see no
reason not to adopt this change.

Cheers,

Mclean

From peridot.faceted at gmail.com  Thu Jan 31 01:37:21 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 31 Jan 2008 01:37:21 -0500
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: <47A128A5.7010406@sci.utah.edu>
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: 

On 30/01/2008, Carlos Scheidegger wrote:
> Say I have a matrix m whose diagonal values I want to replace with
> the contents of vector new_diagonal. Right now, I'm using
>
> m -= diag(m)
> m += diag(new_diagonal)

If m is n by n, how about this?

m[range(n), range(n)] = new_diagonal

It is slightly less efficient than using strides, as suggested
elsewhere, if the array is contiguous in memory, but it will work no
matter how m is laid out, and it's clear. It also generalizes, of
course.

(Incidentally, if it seems like this should set all elements of m,
that is what I would have thought too, but it's not how numpy works:
pairing two index arrays selects only the matching pairs, here the
diagonal. For the full cross product, use the N.ix_ function.)

Anne

From mcleane at math.ubc.ca  Thu Jan 31 03:00:51 2008
From: mcleane at math.ubc.ca (Mclean Edwards)
Date: Thu, 31 Jan 2008 00:00:51 -0800 (PST)
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: 
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: 

To make up for my earlier response, I have coded a small timer routine
that generates a 1000x1000 uniformly random matrix with values between
0 and 1 (using cvxopt), then runs each suggested approach 10 000 times
with randomly generated diagonal vectors.

I have a slow system, but the averaged results are:

0.8 seconds to run David's striding code (in-place modification)
6 seconds to run Anne's code
18 seconds to run a for loop

This is an impressive speedup. For smaller matrices (100x100) there is
still one order of magnitude between for loops and the suggested
setdiag.

Code available upon request (requires cvxopt).

Cheers,

Mclean

From william.ratcliff at gmail.com  Thu Jan 31 03:35:19 2008
From: william.ratcliff at gmail.com (william ratcliff)
Date: Thu, 31 Jan 2008 03:35:19 -0500
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: 
References: <47A128A5.7010406@sci.utah.edu>
Message-ID: <827183970801310035v1bde4e80maf547a17bce5a2bf@mail.gmail.com>

Just curious: is it safe to use the assert statement for anything
beyond debugging, in case someone actually runs with optimization?

Cheers,
William

On Jan 31, 2008 3:00 AM, Mclean Edwards wrote:
> To make up for my earlier response, I have coded a small timer
> routine that generates a 1000x1000 uniformly random matrix with
> values between 0 and 1 (using cvxopt) [...]

From dmitrey.kroshko at scipy.org  Thu Jan 31 03:48:15 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 31 Jan 2008 10:48:15 +0200
Subject: [SciPy-user] Bug in openopt: objFunRelated
In-Reply-To: 
References: 
Message-ID: <47A18B4F.4090203@scipy.org>

Could you provide code that yields the error?

Regards, D.

Mclean Edwards wrote:
> For a user-supplied gradient that is not iterable, user_df in
> objFunRelated.py raises an UnboundLocalError, since user_df_func is
> referenced before it is assigned. [...]

From peridot.faceted at gmail.com  Thu Jan 31 10:43:21 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 31 Jan 2008 10:43:21 -0500
Subject: [SciPy-user] assign to diagonal values?
In-Reply-To: <827183970801310035v1bde4e80maf547a17bce5a2bf@mail.gmail.com>
References: <47A128A5.7010406@sci.utah.edu> <827183970801310035v1bde4e80maf547a17bce5a2bf@mail.gmail.com>
Message-ID: 

On 31/01/2008, william ratcliff wrote:
> Just curious: is it safe to use the assert statement for anything
> beyond debugging, in case someone actually runs with optimization?

If debugging is turned off, then the condition in an assert is not
checked. So no.

More importantly, AssertionErrors *mean* that there is a bug in your
program. Not that it got invalid input, not that it's out of some
resource; they mean your program has a bug. So it's okay to turn them
off if you're in a hurry, because a properly working program never
signals AssertionError no matter what you feed it.

And really, how hard is it to replace

assert c

with

if not c: raise ValueError

?

Anne
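[An editorial sketch of Anne's suggestion applied to the setdiag example
from this thread; the name setdiag_checked is ours. The explicit raises
survive python -O, unlike assert statements.]

    def setdiag_checked(m, d):
        # Validate input with exceptions rather than assert.
        if len(m.shape) != 2 or m.shape[0] != m.shape[1]:
            raise ValueError("m must be a square 2-d array")
        if len(d) != m.shape[0]:
            raise ValueError("d must match the diagonal length")
        m.flat[::m.shape[1] + 1] = d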