From Martin.Rutzinger at uibk.ac.at  Thu Jun  1 05:53:35 2006
From: Martin.Rutzinger at uibk.ac.at (Martin Rutzinger)
Date: Thu, 1 Jun 2006 11:53:35 +0200
Subject: [SciPy-user] triangulation error
Message-ID: <1149155615.447eb91f1ce33@web-mail1.uibk.ac.at>

hi list!

I downloaded the newest version of scipy from the cvs. I try to
triangulate coordinates looking something like this:

x-values: [ 81304.316  81304.75   81305.185  81305.673  81306.053  81305.301 ...]
y-values: [ 7978.127   7976.726   7975.31    7973.745   7972.514   7973.594   7974.991 ...]

tri = Triangulation(xt,yt)

If I run the triangulation from delaunay I get the error
"Speicherzugriffsfehler" (memory access error).

Does anybody have experience with triangulating data with scipy?

thank you for helping
martin

From simon.anders at uibk.ac.at  Thu Jun  1 09:48:13 2006
From: simon.anders at uibk.ac.at (Simon Anders)
Date: Thu, 01 Jun 2006 15:48:13 +0200
Subject: [SciPy-user] triangulation error
In-Reply-To: <1149155615.447eb91f1ce33@web-mail1.uibk.ac.at>
References: <1149155615.447eb91f1ce33@web-mail1.uibk.ac.at>
Message-ID: <447EF01D.1010802@uibk.ac.at>

Hi Martin,

Martin Rutzinger wrote:
> Does anybody have experience with triangulating data with scipy?

No, but I recently used Akima's surface interpolation algorithm, which
in turn uses Delaunay triangulation. For this purpose, I've written a
SWIG interface to wrap Akima's Fortran code for Python.

Akima's code (ACM Collected Algorithms, http://www.acm.org/pubs/calgo/ ,
Algorithm No. 761) uses Renka's Delaunay triangulation code (called
TRIPACK, ACM CALGO, No. 772), and this all works well for me.

So, if your problem persists, I can send you my code. I haven't looked
yet at NumPy's triangulation routine (where is it in the CVS tree?) but
I would be surprised if it does not also link to Renka's Fortran code,
and hence, I might be able to help getting this to work if necessary.

HTH
  Simon

-- 
+---
| Simon Anders, Dipl. Phys.
| Institut fuer Theoretische Physik, Universitaet Innsbruck, Austria
| Tel. +43-512-507-6207, Fax -2919
| preferred (permanent) e-mail: sanders at fs.tum.de

From robert.kern at gmail.com  Thu Jun  1 12:28:56 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 01 Jun 2006 11:28:56 -0500
Subject: [SciPy-user] triangulation error
In-Reply-To: <447EF01D.1010802@uibk.ac.at>
References: <1149155615.447eb91f1ce33@web-mail1.uibk.ac.at> <447EF01D.1010802@uibk.ac.at>
Message-ID: <447F15C8.4000507@gmail.com>

Simon Anders wrote:
> Hi Martin,
>
> Martin Rutzinger wrote:
>
>> Does anybody have experience with triangulating data with scipy?
>
> [...]
>
> So, if your problem persists, I can send you my code. I haven't looked
> yet at NumPy's triangulation routine (where is it in the CVS tree?) but
> I would be surprised if it does not also link to Renka's Fortran code,
> and hence, I might be able to help getting this to work if necessary.

It is in Lib/sandbox/delaunay/ of the scipy SVN repository. It uses
Fortune's sweepline code because that actually has a usable license. The
only license I could find for Renka's code was the generic ACM TOMS "no
commercial use" license.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
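For readers following this thread, here is a minimal sketch of the call
pattern being discussed. The import path and the attribute name are
assumptions based on the thread (the sandbox package must be built;
nothing here is verified against the sandbox code):

    import numpy as np
    from scipy.sandbox.delaunay import Triangulation  # assumed import path

    x = np.array([81304.316, 81304.75, 81305.185, 81305.673, 81306.053])
    y = np.array([7978.127, 7976.726, 7975.31, 7973.745, 7972.514])

    # Sweepline codes can be fragile when coordinates carry a large
    # constant offset; centering the data first is a cheap safeguard
    # against the kind of crash reported above.
    tri = Triangulation(x - x.mean(), y - y.mean())
    print(tri.triangle_nodes)  # (ntriangles, 3) point indices; name assumed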
From R.Springuel at umit.maine.edu  Thu Jun  1 16:02:20 2006
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Thu, 01 Jun 2006 16:02:20 -0400
Subject: [SciPy-user] module with physical constants
In-Reply-To:
References:
Message-ID: <447F47CC.2070708@umit.maine.edu>

Whoops. Linked the wrong file. Sorry. Robert has the correct link.

-- 
R. Padraic Springuel
Teaching Assistant
Department of Physics and Astronomy
University of Maine
Bennett 214
Office Hours: Wednesday 3:00 - 4:00 pm; Friday 12:30 - 1:30 pm

From R.Springuel at umit.maine.edu  Thu Jun  1 16:25:52 2006
From: R.Springuel at umit.maine.edu (R. Padraic Springuel)
Date: Thu, 01 Jun 2006 16:25:52 -0400
Subject: [SciPy-user] module with physical constants
In-Reply-To:
References:
Message-ID: <447F4D50.4010109@umit.maine.edu>

I just updated the file to correct a mistake in the deuteron mass. Due
to the nature of the file server, the update is in a different
location. It can be downloaded here:

http://www.umit.maine.edu/~r.springuel/000CCFE8-80000018/S28B2FA21.-1/Constants.py

Instructions to delete the old version from the server have been given,
but probably haven't been implemented yet (though you won't be able to
see the file at the directory level).

-- 
R. Padraic Springuel
Teaching Assistant
Department of Physics and Astronomy
University of Maine
Bennett 214
Office Hours: Wednesday 3:00 - 4:00 pm; Friday 12:30 - 1:30 pm

From wegwerp at gmail.com  Fri Jun  2 03:44:43 2006
From: wegwerp at gmail.com (weg werp)
Date: Fri, 2 Jun 2006 09:44:43 +0200
Subject: [SciPy-user] first version of physical constants module + codata
Message-ID: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>

Hi group,

attached is the first version of what I had in mind: I updated Chuck's
version of the codata values (which used the 1998 values) with a fresh
copy from NIST of the 2002 values. I parsed the ascii file from their
website automatically to a dict with only some minor fudging, so all
typos should be from NIST. Names and variables might have changed
slightly, but the interface is kept the same.

As a wrapper around the codata I made an easy-to-use module which just
defines a bunch of floats and a few small functions. This also contains
a lot of conversion constants between SI and other systems. It was fun
digging for all the official definitions. Made me happy that I grew up
in a metric country....

The list could probably be extended a little bit, so please list your
favorite constants or suggestions for other names. I am biased towards
physics, so probably some basic astronomy, biology or chemistry is
missing. I am not going to include the complete periodic system though.

Cheers,
Bas

-------------- next part --------------
A non-text attachment was scrubbed...
Name: codata.py
Type: text/x-python
Size: 35008 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: constants.py
Type: text/x-python
Size: 3701 bytes
Desc: not available
URL:
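To make the shape of such a module concrete, here is a minimal sketch of
the kind of file being described. The names are illustrative rather than
the spellings in the attached codata.py/constants.py; the numbers are
CODATA 2002 values or exact legal definitions:

    # Illustrative sketch only -- not the attached file.
    pi = 3.141592653589793
    c = speed_of_light = 299792458.0        # m/s (exact)
    G = 6.6742e-11                          # m^3 kg^-1 s^-2 (CODATA 2002)
    h = 6.6260693e-34                       # J s (CODATA 2002)
    electron_volt = eV = 1.60217653e-19     # J (CODATA 2002)
    inch = 0.0254                           # m (exact by definition)
    pound = 0.45359237                      # kg (exact by definition)

    def C2K(C):
        """Convert Celsius to Kelvin."""
        return C + 273.15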
From oliphant.travis at ieee.org  Fri Jun  2 03:53:18 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 02 Jun 2006 01:53:18 -0600
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>
Message-ID: <447FEE6E.2020501@ieee.org>

weg werp wrote:
> Hi group,
>
> attached is the first version of what I had in mind:

I like it. I'll check it in to the sandbox.

Can we get a clarification on the license? Can we distribute it under
the SciPy license?

-Travis

From wegwerp at gmail.com  Fri Jun  2 04:19:53 2006
From: wegwerp at gmail.com (weg werp)
Date: Fri, 2 Jun 2006 10:19:53 +0200
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <447FEE6E.2020501@ieee.org>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com> <447FEE6E.2020501@ieee.org>
Message-ID: <6f54c160606020119g48e7a22ap5b14241ad5a1c2ef@mail.gmail.com>

Use whatever licence you are normally using for my stuff. Don't know
about Chuck's version.

NIST website: "These World Wide Web pages are provided as a public
service by the National Institute of Standards and Technology (NIST).
With the exception of material marked as copyrighted, information
presented on these pages is considered public information and may be
distributed or copied. Use of appropriate byline/photo/image credits is
requested.", so that should be ok.

Bas

From gruben at bigpond.net.au  Fri Jun  2 11:08:53 2006
From: gruben at bigpond.net.au (Gary Ruben)
Date: Sat, 03 Jun 2006 01:08:53 +1000
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>
Message-ID: <44805485.6050305@bigpond.net.au>

Typo:
au = astronimical_unit

Useful sources for more units:

I think you should establish a naming convention for us, imperial,
(metric?) values. eg.
cup_us = 0.2365882365
cup = 0.25
This means changing gallon to gallon_us

Extra units I'd like to see:

length:
fermi = 1e-15

force:
dyne = 0.00001

energy:
erg = 0.0000001

binary prefixes:
kibi = 2**10
mebi = 2**20
gibi = 2**30
tebi = 2**40
pebi = 2**50
exbi = 2**60
zebi = 2**70
yobi = 2**80
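Gary's suggestion above amounts to something like the following. The
values are exact legal definitions where marked; the names are one
possible convention, not the module's final spelling:

    # One possible suffixing convention for ambiguous units (illustrative).
    gallon_us = 3.785411784e-3    # m^3, US liquid gallon (exact: 231 in^3)
    gallon_imp = 4.54609e-3       # m^3, Imperial gallon (exact)
    cup_us = 2.365882365e-4       # m^3, US customary cup
    cup_metric = 2.5e-4           # m^3

    fermi = 1e-15                 # m
    dyne = 1e-5                   # N (force)
    erg = 1e-7                    # J (energy)

    kibi, mebi, gibi, tebi = 2**10, 2**20, 2**30, 2**40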
From wegwerp at gmail.com  Fri Jun  2 12:50:14 2006
From: wegwerp at gmail.com (weg werp)
Date: Fri, 2 Jun 2006 18:50:14 +0200
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <44805485.6050305@bigpond.net.au>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com> <44805485.6050305@bigpond.net.au>
Message-ID: <6f54c160606020950m19f759a0td59da5af9c5849b4@mail.gmail.com>

> I think you should establish a naming convention for us, imperial,
> (metric?) values. eg.
> cup_us = 0.2365882365
> cup = 0.25
> This means changing gallon to gallon_us

My convention is to only use the most common version without any suffix
and only mark the lesser used variants (if they are used at all). If
you look back long enough, every city had its own foot. It is bad
enough that we still have those imperial versions hanging around in the
first place, so let's not encourage any nonstandard variants of
forbidden units. I am also trying to restrict it to units used in
science/engineering, unless someone is writing a SciPy-powered
'tablespoon to teaspoon' converter....

> Extra units I'd like to see:

Thanks, I will include those in the next version.

Bas

From robert.kern at gmail.com  Fri Jun  2 14:25:37 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 02 Jun 2006 13:25:37 -0500
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com>
Message-ID: <448082A1.8050402@gmail.com>

weg werp wrote:

> As a wrapper around the codata I made an easy-to-use module which just
> defines a bunch of floats and a few small functions. This also
> contains a lot of conversion constants between SI and other systems.
> It was fun digging for all the official definitions. Made me happy
> that I grew up in a metric country....

IMO, *the* units conversion package is Frink (though it's not Python).
Grab unit conversion values from its data file:

  http://futureboy.homeip.net/frinkdata/units.txt

Any units package worth its salt should be able to replicate Frink's
sample calculations:

  http://futureboy.homeip.net/frinkdocs/#SampleCalculations

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

From cookedm at physics.mcmaster.ca  Fri Jun  2 15:22:22 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Fri, 2 Jun 2006 15:22:22 -0400
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <6f54c160606020950m19f759a0td59da5af9c5849b4@mail.gmail.com>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com> <44805485.6050305@bigpond.net.au> <6f54c160606020950m19f759a0td59da5af9c5849b4@mail.gmail.com>
Message-ID: <20060602152222.60611221@arbutus.physics.mcmaster.ca>

On Fri, 2 Jun 2006 18:50:14 +0200 "weg werp" wrote:

> > I think you should establish a naming convention for us, imperial,
> > (metric?) values. eg.
> > cup_us = 0.2365882365
> > cup = 0.25
> > This means changing gallon to gallon_us

> My convention is to only use the most common version without any
> suffix and only mark the lesser used variants (if they are used at
> all).

I'm -1 on suffix/non-suffix for similarly named units. I'd like
suffixes on all the variants, e.g. gallon_us and gallon_uk (or
gallon_imp). gallon_us is the lesser-used one if you're in the UK. And
a pint here in Canada is the UK one, not the American one (you get more
beer here :-)

Plus, there are things like ounces, which have several different
meanings: fluid ounce (Imperial and US), ounce-force, avoirdupois
ounce, troy ounce, etc. See http://en.wikipedia.org/wiki/Ounce

As someone who grew up with metric, and in a country where the Imperial
units used to be more common, I can't tell you which is "the most
common version" for your definitions.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
From agree74 at hotmail.com  Fri Jun  2 16:48:43 2006
From: agree74 at hotmail.com (A. Rachel Grinberg)
Date: Fri, 02 Jun 2006 13:48:43 -0700
Subject: [SciPy-user] Python - Matlab least squares difference
Message-ID:

Hi,

I noticed a difference between the linear least square solutions in
Python and Matlab. I understand that if the system is underdetermined
the solution is not going to be unique; nevertheless, I would like to
figure out the algorithm Matlab is using, since it seems "better" to
me. For example, let's say I have a matrix

        2 1 0
    A = 1 1 1     and  b = (0,1)'

While Matlab yields (0,0,1)' as the solution to A\b, scipy's result for
linalg.linear_least_squares(A,b) is

array([[-0.16666667],
       [ 0.33333333],
       [ 0.83333333]])

Any ideas how Matlab's A\b is implemented?

Thank you,
Rachel

From robert.kern at gmail.com  Fri Jun  2 17:06:07 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 02 Jun 2006 16:06:07 -0500
Subject: [SciPy-user] Python - Matlab least squares difference
In-Reply-To:
References:
Message-ID: <4480A83F.6050405@gmail.com>

A. Rachel Grinberg wrote:
> Hi,
>
> I noticed a difference between the linear least square solutions in
> Python and Matlab. I understand that if the system is underdetermined
> the solution is not going to be unique; nevertheless, I would like to
> figure out the algorithm Matlab is using, since it seems "better" to
> me. For example, let's say I have a matrix
>
>         2 1 0
>     A = 1 1 1     and  b = (0,1)'
>
> While Matlab yields (0,0,1)' as the solution to A\b, scipy's result for
> linalg.linear_least_squares(A,b) is
>
> array([[-0.16666667],
>        [ 0.33333333],
>        [ 0.83333333]])
>
> Any ideas how Matlab's A\b is implemented?

Not sure, but it's wrong (more or less). In underdetermined linear least
squares problems, it is conventional to choose the solution that has
minimum L2-norm.

>>> A = array([[2., 1, 0], [1, 1, 1]])
>>> A
array([[ 2.,  1.,  0.],
       [ 1.,  1.,  1.]])
>>> b = array([0., 1.])
>>> linalg.lstsq(A, b)
(array([-0.16666667,  0.33333333,  0.83333333]), array([], dtype=float64), 2,
 array([ 2.6762432 ,  0.91527173]))
>>> x = _[0]
>>> dot(x, x)
0.83333333333333426
>>> dot([0., 0, 1], [0., 0, 1])
1.0

Implementations that do something else have some 'splainin' to do. Of
course, A\b may not quite be "do linear least squares" in Matlab but
something else. I don't know. FWIW, Octave gives me the same answer as
numpy/scipy.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
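For what it's worth, the "basic" solution MATLAB returns (at most
rank(A) nonzero entries) can be reproduced with a pivoted QR
factorization. A sketch in modern numpy/scipy spelling -- the
pivoting=True keyword is the current scipy.linalg.qr interface, not
something available in the scipy of this thread:

    import numpy as np
    from scipy.linalg import qr, lstsq

    A = np.array([[2., 1., 0.],
                  [1., 1., 1.]])
    b = np.array([0., 1.])

    x_min_norm = lstsq(A, b)[0]        # [-1/6, 1/3, 5/6], minimum L2 norm

    Q, R, piv = qr(A, pivoting=True)   # A[:, piv] == Q @ R
    k = np.linalg.matrix_rank(A)
    x_basic = np.zeros(A.shape[1])
    x_basic[piv[:k]] = np.linalg.solve(R[:k, :k], Q.T[:k] @ b)
    # x_basic -> [0., 0., 1.] (up to pivot order), matching MATLAB's A\b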
From fullung at gmail.com  Fri Jun  2 20:42:32 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Sat, 3 Jun 2006 02:42:32 +0200
Subject: [SciPy-user] Python - Matlab least squares difference
In-Reply-To: <4480A83F.6050405@gmail.com>
Message-ID: <032201c686a6$99754390$01eaa8c0@dsp.sun.ac.za>

Hello all

> Implementations that do something else have some 'splainin' to do. Of
> course, A\b may not quite be "do linear least squares" in Matlab but
> something else. I don't know. FWIW, Octave gives me the same answer as
> numpy/scipy.

http://www.mathworks.com/access/helpdesk/help/techdoc/ref/mldivide.html

Scroll down to the Algorithm section for details.

Regards,

Albert

From pajer at iname.com  Fri Jun  2 21:41:28 2006
From: pajer at iname.com (Gary)
Date: Fri, 02 Jun 2006 21:41:28 -0400
Subject: [SciPy-user] first version of physical constants module + codata
In-Reply-To: <448082A1.8050402@gmail.com>
References: <6f54c160606020044p7e22d422i8954eb2ebe2008ca@mail.gmail.com> <448082A1.8050402@gmail.com>
Message-ID: <4480E8C8.40402@iname.com>

Robert Kern wrote:

>weg werp wrote:
>
>>As a wrapper around the codata I made an easy-to-use module which just
>>defines a bunch of floats and a few small functions. This also
>>contains a lot of conversion constants between SI and other systems.
>>It was fun digging for all the official definitions. Made me happy
>>that I grew up in a metric country....
>
>IMO, *the* units conversion package is Frink (though it's not Python).
>Grab unit conversion values from its data file:
>
>  http://futureboy.homeip.net/frinkdata/units.txt
>
>Any units package worth its salt should be able to replicate Frink's
>sample calculations:
>
>  http://futureboy.homeip.net/frinkdocs/#SampleCalculations

A lot of the Frink stuff is ridiculous. (OT: his editorializing grates,
cf. his ignorant comments on the candela.) The webi, mebi, yaco, whacko
stuff doesn't take much work, I guess, but has anyone *ever* used them?
(I *once* saw yocto in a published work.) But again, it doesn't take
much energy to include.

From scipy at mspacek.mm.st  Sat Jun  3 15:06:49 2006
From: scipy at mspacek.mm.st (Martin Spacek)
Date: Sat, 03 Jun 2006 12:06:49 -0700
Subject: [SciPy-user] Sample and hold
Message-ID: <4481DDC9.1020207@mspacek.mm.st>

Hi,

I've got an array of irregularly spaced data points, like this:

 *  *
    *
 *    *
*   *
  *

I'd like to resample them so that the signal between points is simply
the value at the previous point, like this:

****      *****     *****               ****
    *          **********  *****************    ****

I'd like the output to have regularly spaced data points. This means
that the exact times of the original data points will be lost, but
that's OK.

I've searched scipy.signal in vain (I'm not even sure what to call this
procedure, though I probably should). What's the best way to do this in
scipy?

Thanks,

-- 
Martin Spacek
PhD student, Graduate Program in Neuroscience
Dept. of Ophthalmology and Visual Sciences
University of British Columbia, Vancouver, BC, Canada
http://swindale.ecc.ubc.ca

From scipy at mspacek.mm.st  Sat Jun  3 15:12:42 2006
From: scipy at mspacek.mm.st (Martin Spacek)
Date: Sat, 03 Jun 2006 12:12:42 -0700
Subject: [SciPy-user] Sample and hold
In-Reply-To: <4481DDC9.1020207@mspacek.mm.st>
References: <4481DDC9.1020207@mspacek.mm.st>
Message-ID: <4481DF2A.3020009@mspacek.mm.st>

Martin Spacek wrote:
> Hi,
>
> I've got an array of irregularly spaced data points, like this:
>
>  *  *
>     *
>  *    *
> *   *
>   *

To be more accurate, I've actually got two arrays of the same length.
One to hold the values, the other to hold the timepoints.

Cheers,

Martin
From robert.kern at gmail.com  Sat Jun  3 15:40:44 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 03 Jun 2006 14:40:44 -0500
Subject: [SciPy-user] Sample and hold
In-Reply-To: <4481DF2A.3020009@mspacek.mm.st>
References: <4481DDC9.1020207@mspacek.mm.st> <4481DF2A.3020009@mspacek.mm.st>
Message-ID: <4481E5BC.9070001@gmail.com>

Martin Spacek wrote:
> Martin Spacek wrote:
>
>> Hi,
>>
>> I've got an array of irregularly spaced data points, like this:
>>
>>  *  *
>>     *
>>  *    *
>> *   *
>>   *
>
> To be more accurate, I've actually got two arrays of the same length.
> One to hold the values, the other to hold the timepoints.

Okay, let's suppose that these arrays are y and x respectively and that
x is already sorted.

# Untested!

import numpy as np

def sample_and_hold(x, y, newx):
    idx = np.searchsorted(x, newx) - 1
    # Handle the cases where newx is smaller than the first point.
    idx = np.where(idx < 0, 0, idx)
    return y[idx]

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

From jdhunter at ace.bsd.uchicago.edu  Sat Jun  3 15:35:38 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Sat, 03 Jun 2006 14:35:38 -0500
Subject: [SciPy-user] Sample and hold
In-Reply-To: <4481DF2A.3020009@mspacek.mm.st> (Martin Spacek's message of "Sat, 03 Jun 2006 12:12:42 -0700")
References: <4481DDC9.1020207@mspacek.mm.st> <4481DF2A.3020009@mspacek.mm.st>
Message-ID: <87mzcuf2wl.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "Martin" == Martin Spacek writes:

    Martin> Martin Spacek wrote:
    >> Hi,
    >>
    >> I've got an array of irregularly spaced data points, like this:
    >>
    >> * *  *  * *  * *  *

    Martin> To be more accurate, I've actually got two arrays of the
    Martin> same length. One to hold the values, the other to hold
    Martin> the timepoints.

I learned from my matlab days that cumsum is a useful workhorse when
trying to code things up w/o loops. This example takes the first diff
of the sample data and inserts it into the resampled vector at the right
points, assuming your time data is sorted. When you do a cumsum, the
zeros of the target vector contribute nothing and the diff is summed up
to recover the original signal between sample points.

import numpy as n

# some random sample data
N = 20
t = n.cumsum(n.random.rand(N))
x = n.random.rand(N)

dx = n.zeros(x.shape[0], float)
# create a first diff vector so that our cumsum trick can
# be used below
dx[0] = x[0]
dx[1:] = n.diff(x)

# resampled vectors
dt = 0.01
ts = n.arange(0, max(t)+dt, dt)
xs = n.zeros(ts.shape, float)

# find out where to insert our (diffed) sample points
# into the target vector using searchsorted
ind = n.searchsorted(ts, t)
xs[ind] = dx
xs = n.cumsum(xs)

from pylab import figure, show
fig = figure()
ax1 = fig.add_subplot(111)
ax1.plot(t, x, 'o')
ax1.plot(ts, xs, '-')
show()

From jdhunter at ace.bsd.uchicago.edu  Sat Jun  3 15:41:39 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Sat, 03 Jun 2006 14:41:39 -0500
Subject: [SciPy-user] Sample and hold
In-Reply-To: <4481E5BC.9070001@gmail.com> (Robert Kern's message of "Sat, 03 Jun 2006 14:40:44 -0500")
References: <4481DDC9.1020207@mspacek.mm.st> <4481DF2A.3020009@mspacek.mm.st> <4481E5BC.9070001@gmail.com>
Message-ID: <87verido24.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "Robert" == Robert Kern writes:

    Robert> def sample_and_hold(x, y, newx):
    Robert>     idx = np.searchsorted(x, newx) - 1

Oh yes, much better than mine!

JDH
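A quick usage sketch for Robert's sample_and_hold above (like the
original, untested at the time; the data values are made up for
illustration):

    import numpy as np

    t = np.array([0.0, 0.4, 1.1, 1.5, 2.7])   # irregular times, sorted
    v = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # values at those times

    newt = np.arange(0.0, 3.0, 0.25)           # regular grid
    newv = sample_and_hold(t, v, newt)         # holds the previous value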
From rclewley at cam.cornell.edu  Sat Jun  3 20:21:08 2006
From: rclewley at cam.cornell.edu (Robert Clewley)
Date: Sat, 3 Jun 2006 20:21:08 -0400 (EDT)
Subject: [SciPy-user] Sample and hold
In-Reply-To: <4481DDC9.1020207@mspacek.mm.st>
References: <4481DDC9.1020207@mspacek.mm.st>
Message-ID:

Hi Martin,

On Sat, 3 Jun 2006, Martin Spacek wrote:

> I'd like the output to have regularly spaced data points. This means
> that the exact times of the original data points will be lost, but
> that's OK.

FWIW, you might be interested in a slightly more sophisticated solution:
abstracting the data and treating it like a true curve. You can have
arbitrary access to the curve for convenient resampling at your desired
interval length. For this you could use the interp0d class that I
adapted from scipy's interp1d. You could also easily modify it to use
the input points as the left or right-hand side of the constant
intervals (mine uses them as right points, and averages the y values
from the interval endpoints).

Naturally, this solution involves more overhead than the methods
previously posted. Below is the modified __call__ method of interp1d
(from scipy 0.3.2) that you need to create interp0d. Alternatively, the
whole thing (including a curve class over these interpolators) can be
found in the PyDSTool package. Note that the original data points are
retained inside the class. A demo of it working with independent time
and data arrays in this package also appears below.

-Rob

    def __call__(self,x_new):
        """Find piecewise-constant interpolated y_new = f(x_new).

        Inputs:
          x_new -- New independent variables.

        Outputs:
          y_new -- Piecewise-constant interpolated values corresponding
                   to x_new.
        """
        # 1. Handle values in x_new that are outside of x. Throw error,
        #    or return a list of mask array indicating the outofbounds
        #    values. The behavior is set by the bounds_error variable.
        x_new_1d = atleast_1d(x_new)
        out_of_bounds = self._check_bounds(x_new_1d)
        # 2. Find where in the original data, the values to interpolate
        #    would be inserted.
        #    Note: If x_new[n] = x[m], then m is returned by searchsorted.
        x_new_indices = searchsorted(self.x, x_new_1d)
        # 3. Clip x_new_indices so that they are within the range of
        #    self.x indices and at least 1. Removes mis-interpolation
        #    of x_new[n] = x[0]
        x_new_indices = clip(x_new_indices, 1, len(self.x)-1).astype(Int)
        # 4. Calculate the region that each x_new value falls in.
        lo = x_new_indices - 1; hi = x_new_indices
        # !! take() should default to the last axis (IMHO) and remove
        # !! the extra argument.
        # 5. Calculate the actual value for each entry in x_new.
        y_lo = take(self.y, lo, axis=self.interp_axis)
        y_hi = take(self.y, hi, axis=self.interp_axis)
        y_new = (y_lo + y_hi) / 2.
        # 6. Fill any values that were out of bounds with NaN
        # !! Need to think about how to do this efficiently for
        # !! multi-dimensional cases.
        yshape = y_new.shape
        y_new = y_new.flat
        new_shape = list(yshape)
        new_shape[self.interp_axis] = 1
        sec_shape = [1]*len(new_shape)
        sec_shape[self.interp_axis] = len(out_of_bounds)
        out_of_bounds.shape = sec_shape
        new_out = ones(new_shape)*out_of_bounds
        putmask(y_new, new_out.flat, self.fill_value)
        y_new.shape = yshape
        # Rotate the values of y_new back so that they correspond to the
        # correct x_new values.
        result = swapaxes(y_new, self.interp_axis, self.axis)
        try:
            len(x_new)
            return result
        except TypeError:
            return result[0]
        return result

Example of use in PyDSTool, using the InterpolateTable class:

>> timeData = linspace(0, 10, 30)
>> sindata = sin(timeData)
>> xData = {'sinx': sindata}
>> sin_pcwc_interp = InterpolateTable({'tdata': timeData,
                                       'ics': xData,
                                       'name': 'interp0d',
                                       'method': 'constant',
                                       'checklevel': 1,
                                       'abseps': 1e-5
                                      }).compute('sin_interp')
>> sin_pcwc_interp(2*pi)   # should be close to zero!
0.094554071417120258
>> t = linspace(0,10,300)   # new time array, much finer resolution
>> y = sin_pcwc_interp(t)   # new data (as pointset, so use array(y) to get array)

From david.grant at telus.net  Sun Jun  4 06:09:48 2006
From: david.grant at telus.net (David Grant)
Date: Sun, 04 Jun 2006 03:09:48 -0700
Subject: [SciPy-user] ragged arrays
Message-ID: <200606040309.48941.david.grant@telus.net>

Simple question: in Matlab you can do ragged arrays using "cell arrays."
Is the equivalent thing in scipy/numpy/python just a list of arrays? I
made one here:

In [55]:a
Out[55]:
[array([0, 0, 0, 0, 0]),
 array([0, 0, 0, 0, 0, 0]),
 array([0, 0, 0, 0]),
 array([1, 1, 1, 1])]
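One spelling that is close in spirit to MATLAB's cell arrays is an
object array. A hedged sketch in modern numpy (the replies below give
richer, list-based alternatives):

    import numpy as np

    cells = np.empty(3, dtype=object)   # 1-d array of Python objects
    cells[0] = np.zeros(5)
    cells[1] = np.zeros(6)
    cells[2] = np.ones(4)

    print(max(c.max() for c in cells))  # reduce across the ragged pieces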
From jdhunter at ace.bsd.uchicago.edu  Sun Jun  4 17:51:04 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Sun, 04 Jun 2006 16:51:04 -0500
Subject: [SciPy-user] ragged arrays
In-Reply-To: <200606040309.48941.david.grant@telus.net> (David Grant's message of "Sun, 04 Jun 2006 03:09:48 -0700")
References: <200606040309.48941.david.grant@telus.net>
Message-ID: <87r724pp2v.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "David" == David Grant writes:

    David> Simple question: in Matlab you can do ragged arrays using
    David> "cell arrays." Is the equivalent thing in
    David> scipy/numpy/python just a list of arrays? I made one here:

    David> In [55]:a Out[55]: [array([0, 0, 0, 0, 0]), array([0, 0, 0,
    David> 0, 0, 0]), array([0, 0, 0, 0]), array([1, 1, 1, 1])]

Oh man, that brings back a dreadful past -- cell arrays. The built-in
python data structures are much richer. You can use a list or a
dictionary to store them, or a class that derives from one of these or
a custom class.

One good solution is to derive a custom class from list with helper
methods to operate over the list of arrays. Eg, if you needed to find
the max element over all of your ragged arrays, you could do something
like

class ragged(list):
    def max(self):
        return max([max(x) for x in self])

import numpy as n
n1 = n.random.rand(10)
n2 = n.random.rand(20)
n3 = n.random.rand(30)

r = ragged((n1, n2, n3))
print r.max()

n4 = n.random.rand(12)
r.append(n4)
print r.max()

JDH

From david.grant at telus.net  Sun Jun  4 18:39:29 2006
From: david.grant at telus.net (David Grant)
Date: Sun, 04 Jun 2006 15:39:29 -0700
Subject: [SciPy-user] ragged arrays
In-Reply-To: <87r724pp2v.fsf@peds-pc311.bsd.uchicago.edu>
References: <200606040309.48941.david.grant@telus.net> <87r724pp2v.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <200606041539.29416.david.grant@telus.net>

On Sunday 04 June 2006 14:51, John Hunter wrote:
> [John's message and ragged-class example quoted in full; snipped]

That is really cool. Thanks. I was going to do the exact same thing
except I was just going to implement max as a static method that
operated on a list. This is much better. And using the list
comprehension thing makes it so compact as well.

Dave

From david.grant at telus.net  Sun Jun  4 19:34:06 2006
From: david.grant at telus.net (David Grant)
Date: Sun, 04 Jun 2006 16:34:06 -0700
Subject: [SciPy-user] Vector indexing
Message-ID: <200606041634.07727.david.grant@telus.net>

Vector indexing question:

a=rand(10,10)

1) a[0:3, 0:3]  # gives me the first 3 rows and columns

2) a[range(2),range(2)] gives me a one-dimensional array of elements
[0,0], [1,1], [2,2]

How do I do 1) using some on-the-fly generated array?

Thanks,

David

From david.grant at telus.net  Sun Jun  4 20:03:49 2006
From: david.grant at telus.net (David Grant)
Date: Sun, 04 Jun 2006 17:03:49 -0700
Subject: [SciPy-user] Vector indexing
In-Reply-To: <200606041634.07727.david.grant@telus.net>
References: <200606041634.07727.david.grant@telus.net>
Message-ID: <200606041703.49924.david.grant@telus.net>

On Sunday 04 June 2006 16:34, David Grant wrote:
> Vector indexing question:
>
> a=rand(10,10)
>
> 1) a[0:3, 0:3]  # gives me the first 3 rows and columns
>
> 2) a[range(2),range(2)] gives me a one-dimensional array of elements
> [0,0], [1,1], [2,2]
>
> How do I do 1) using some on-the-fly generated array?

I figured out one way to do it:

take(take(a,x,0),y,1)

It looks like take is in numpy.core.oldnumeric, so is there a new way
to do it? I am very used to Matlab, unfortunately....

Dave

From simon at arrowtheory.com  Sun Jun  4 20:13:56 2006
From: simon at arrowtheory.com (Simon Burton)
Date: Mon, 5 Jun 2006 10:13:56 +1000
Subject: [SciPy-user] Vector indexing
In-Reply-To: <200606041703.49924.david.grant@telus.net>
References: <200606041634.07727.david.grant@telus.net> <200606041703.49924.david.grant@telus.net>
Message-ID: <20060605101356.3b9d1dec.simon@arrowtheory.com>

On Sun, 04 Jun 2006 17:03:49 -0700 David Grant wrote:

> On Sunday 04 June 2006 16:34, David Grant wrote:
> > Vector indexing question:
> >
> > a=rand(10,10)
> >
> > 1) a[0:3, 0:3]  # gives me the first 3 rows and columns
> >
> > 2) a[range(2),range(2)] gives me a one-dimensional array of elements
> > [0,0], [1,1], [2,2]
> >
> > How do I do 1) using some on-the-fly generated array?
>
> I figured out one way to do it:
>
> take(take(a,x,0),y,1)

a[x,:][:,y]

?

Simon.

-- 
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601
Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
From robert.kern at gmail.com  Sun Jun  4 20:17:39 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 04 Jun 2006 19:17:39 -0500
Subject: [SciPy-user] Vector indexing
In-Reply-To: <200606041634.07727.david.grant@telus.net>
References: <200606041634.07727.david.grant@telus.net>
Message-ID: <44837823.7040703@gmail.com>

David Grant wrote:
> Vector indexing question:
>
> a=rand(10,10)
>
> 1) a[0:3, 0:3]  # gives me the first 3 rows and columns
>
> 2) a[range(2),range(2)] gives me a one-dimensional array of elements
> [0,0], [1,1], [2,2]

Actually, it gets you array([a[0,0], a[1,1]]).

> How do I do 1) using some on-the-fly generated array?

Slice objects.

>>> a[slice(0, 3), slice(0, 3)]
array([[ 0.83840663,  0.36944056,  0.48230632],
       [ 0.04508558,  0.25772124,  0.62787961],
       [ 0.0455162 ,  0.69427227,  0.26374691]])

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Sun Jun  4 20:18:57 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 04 Jun 2006 19:18:57 -0500
Subject: [SciPy-user] Vector indexing
In-Reply-To: <200606041634.07727.david.grant@telus.net>
References: <200606041634.07727.david.grant@telus.net>
Message-ID: <44837871.8050101@gmail.com>

David Grant wrote:
> Vector indexing question:
>
> a=rand(10,10)
>
> 1) a[0:3, 0:3]  # gives me the first 3 rows and columns
>
> 2) a[range(2),range(2)] gives me a one-dimensional array of elements
> [0,0], [1,1], [2,2]
>
> How do I do 1) using some on-the-fly generated array?

Never mind. Simon has what you want.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

From david.grant at telus.net  Mon Jun  5 01:08:38 2006
From: david.grant at telus.net (David Grant)
Date: Sun, 04 Jun 2006 22:08:38 -0700
Subject: [SciPy-user] Vector indexing
In-Reply-To: <44837823.7040703@gmail.com>
References: <200606041634.07727.david.grant@telus.net> <44837823.7040703@gmail.com>
Message-ID: <200606042208.39077.david.grant@telus.net>

On Sunday 04 June 2006 17:17, Robert Kern wrote:
> David Grant wrote:
> > Vector indexing question:
> >
> > a=rand(10,10)
> >
> > 1) a[0:3, 0:3]  # gives me the first 3 rows and columns
> >
> > 2) a[range(2),range(2)] gives me a one-dimensional array of elements
> > [0,0], [1,1], [2,2]
>
> Actually, it gets you array([a[0,0], a[1,1]]).

There I go again with my 1-indexed arrays... thanks Matlab. ;-)

> > How do I do 1) using some on-the-fly generated array?
>
> Slice objects.
>
> >>> a[slice(0, 3), slice(0, 3)]
>
> array([[ 0.83840663,  0.36944056,  0.48230632],
>        [ 0.04508558,  0.25772124,  0.62787961],
>        [ 0.0455162 ,  0.69427227,  0.26374691]])

But let's say that you didn't want a slice, but you wanted to grab some
random rows and columns. Simon's solution seems to do the trick:
a[x,:][:,y] which I guess is just performing indexing on it twice.

Dave
From pau.gargallo at gmail.com  Mon Jun  5 09:51:29 2006
From: pau.gargallo at gmail.com (Pau Gargallo)
Date: Mon, 5 Jun 2006 15:51:29 +0200
Subject: [SciPy-user] Vector indexing
In-Reply-To: <200606042208.39077.david.grant@telus.net>
References: <200606041634.07727.david.grant@telus.net> <44837823.7040703@gmail.com> <200606042208.39077.david.grant@telus.net>
Message-ID: <6ef8f3380606050651j6b930ee5i4e8017169b0f4b4c@mail.gmail.com>

On 6/5/06, David Grant wrote:
> [earlier exchange quoted in full; snipped]
>
> But let's say that you didn't want a slice, but you wanted to grab some
> random rows and columns. Simon's solution seems to do the trick:
> a[x,:][:,y] which I guess is just performing indexing on it twice.
>
> Dave

a[ ix_(x,y) ] also works.

ix_ is useful when the number of dimensions is unknown at coding time.

pau
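Putting the two suggestions side by side (a sketch in modern numpy;
ix_ builds the open-mesh index arrays, so the result is the
rows-by-columns cross product):

    import numpy as np

    a = np.arange(100).reshape(10, 10)
    rows = [1, 3, 7]
    cols = [0, 2]

    sub1 = a[rows, :][:, cols]       # index twice, as Simon suggested
    sub2 = a[np.ix_(rows, cols)]     # same result via ix_, as Pau suggested
    assert (sub1 == sub2).all()      # both give the 3x2 cross-product block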
From solkaa at gmail.com  Mon Jun  5 10:01:07 2006
From: solkaa at gmail.com (Sergey Dolgov)
Date: Mon, 5 Jun 2006 18:01:07 +0400
Subject: [SciPy-user] numpy.test() segfaults with numpy 0.9.8
Message-ID: <532c966c0606050701r498492f5v271c162dc1bee5aa@mail.gmail.com>

Hi, I've just built numpy 0.9.8 on a Debian box (i386, unstable), and
it fails to pass its tests (see below). BTW, all tests ran fine with
previous version 0.9.6 (but it seems like the failing one was
introduced only in 0.9.8).

-- 
Sergey

In [3]: import numpy
In [4]: numpy.test(verbosity=10)
Found 5 tests for numpy.distutils.misc_util
Warning: No test file found in /usr/local/lib/python2.4/site-packages/numpy/tests for module
......................snip......................................
test_array_with_context (numpy.core.tests.test_umath.test_special_methods) ... ok
test_failing_wrap (numpy.core.tests.test_umath.test_special_methods) ... ok
test_old_wrap (numpy.core.tests.test_umath.test_special_methods) ... ok
test_priority (numpy.core.tests.test_umath.test_special_methods) ... ok
test_wrap (numpy.core.tests.test_umath.test_special_methods) ... ok
check_types (numpy.core.tests.test_scalarmath.test_types)Segmentation fault

From SYoung at LEGGMASONCANADA.com  Mon Jun  5 11:48:51 2006
From: SYoung at LEGGMASONCANADA.com (Steven Young)
Date: Mon, 5 Jun 2006 11:48:51 -0400
Subject: [SciPy-user] brute() in optimize.py
Message-ID:

Hi,

I'm quite new to using SciPy (and NumPy), and was wondering if I could
get some help on using brute(). I've been able to use some of the other
optimization functions but would like to reconcile differing values
using brute force. I'm trying to optimize a function taking a single
scalar argument. If I could get an example of how to use brute() or any
background on how to use it, that would be great.

Thanks,

From oliphant at ee.byu.edu  Mon Jun  5 13:02:51 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Mon, 05 Jun 2006 11:02:51 -0600
Subject: [SciPy-user] numpy.test() segfaults with numpy 0.9.8
In-Reply-To: <532c966c0606050701r498492f5v271c162dc1bee5aa@mail.gmail.com>
References: <532c966c0606050701r498492f5v271c162dc1bee5aa@mail.gmail.com>
Message-ID: <448463BB.3000709@ee.byu.edu>

Sergey Dolgov wrote:

> Hi, I've just built numpy 0.9.8 on a Debian box (i386, unstable), and
> it fails to pass its tests (see below). BTW, all tests ran fine with
> previous version 0.9.6 (but it seems like the failing one was
> introduced only in 0.9.8).

It's a problem with Python itself. It is known about and has been fixed
in Python 2.5 and worked around in the SVN version of NumPy. Most code
will not exercise the spot where the problem is (using the array scalar
constructor for complex types directly), so I wouldn't worry about it
at this point.

Best,

-Travis
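On Steven's brute() question above: the function takes a tuple of
ranges (one slice per parameter) and evaluates the objective over the
full grid. A hedged sketch against the current scipy.optimize
interface; the objective function here is made up for illustration:

    import numpy as np
    from scipy.optimize import brute

    def f(params):
        (a,) = params                      # brute passes a parameter vector
        return (a - 2.0) ** 2 + np.sin(5 * a)

    # Search a single scalar on [-1, 4] in steps of 0.01; finish=None
    # returns the best grid point instead of polishing with fmin.
    best = brute(f, ranges=(slice(-1, 4, 0.01),), finish=None)
    print(best)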
From joseph.a.crider at Boeing.com  Mon Jun  5 14:09:43 2006
From: joseph.a.crider at Boeing.com (Crider, Joseph A)
Date: Mon, 5 Jun 2006 13:09:43 -0500
Subject: [SciPy-user] SciPy with Sun Performance Library
Message-ID:

Is anyone using the Sun Performance Library
(http://developers.sun.com/prodtech/cc/perflib_index.html) as a
substitute for BLAS and LAPACK with SciPy? I have access to this
library and would like to use it, especially as I've not been very
successful getting ATLAS to compile on our configuration.

Thanks.

J. Allen Crider
(256)461-2699

From rclewley at cam.cornell.edu  Mon Jun  5 15:13:20 2006
From: rclewley at cam.cornell.edu (Robert Clewley)
Date: Mon, 5 Jun 2006 15:13:20 -0400 (EDT)
Subject: [SciPy-user] ANN: PyDSTool v0.83.2 with AUTO support
Message-ID:

Dear SciPy users,

We'd like to announce the release of version 0.83.2 of PyDSTool, for
use in conjunction with SciPy 0.3.2 ("old" SciPy) to perform dynamical
systems simulation and analysis.

The most significant feature of the new version is the inclusion of a
low-level interface to the AUTO continuation package, which allows
access to routines for periodic orbits. Our simple user interface to
AUTO follows the existing format used in the PyCont sub-package, but
provides access to all of the algorithmic parameters associated with
periodic orbit continuation. The graphical output capabilities of
PyCont have also been greatly improved from v0.83.1. Demos of these
features are included in the download, and the PyCont wiki
documentation will be expanded and brought up to date over the next
few days.

Other improvements / bug fixes in the new version, as well as
documentation and a tutorial, can be found on the wiki:

  http://pydstool.sourceforge.net

Download directly from http://sourceforge.net/projects/pydstool

Your continuing feedback and code contributions are greatly
appreciated. Please help us improve this project!

Thanks for your attention,

Rob Clewley, Erik Sherwood, Drew LaMar,
Dept. of Mathematics and Center for Applied Mathematics,
Cornell University.

From nvf at MIT.EDU  Tue Jun  6 01:32:34 2006
From: nvf at MIT.EDU (Nick Fotopoulos)
Date: Tue, 06 Jun 2006 01:32:34 -0400
Subject: [SciPy-user] Trac
Message-ID: <44851372.4080801@mit.edu>

Dear admins,

I've forgotten my Trac username/password. Is there any way to have my
username revealed and my password reset for me? I didn't see such a
utility on the website. I'd like to update the enhancements I submitted
for scipy.io.loadmat (fixed a byte-order bug in the enhancement).

With amazing timing, I wore my SPAMIT (Stupid People at MIT) shirt
today.

Thanks,
Nick

From giovanni.samaey at cs.kuleuven.ac.be  Tue Jun  6 07:46:13 2006
From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey)
Date: Tue, 06 Jun 2006 13:46:13 +0200
Subject: [SciPy-user] scipy, data storage and mpi
Message-ID: <44856B05.20508@cs.kuleuven.ac.be>

Hi,

in our department, we are currently examining the possibility of
discouraging the use of matlab in favor of python + scipy + matplotlib
-- the advantages being the open character, the programming language,
and the possibilities for parallel computing.

We are currently looking for an MPI python package and a data storage
package that can work with numpy arrays, as well as with each other.
Which of the MPI choices are known to work with numpy arrays, or are
expected to work with them in the near future?

Data storage seems well developed in the pyTables package? Does this
allow parallel writing, and if so, does it matter which MPI python
package we use?

I should probably ask some of these questions on more specific mailing
lists, but I was hoping that some people here would have a more general
view...

Best, and thanks in advance for any comments,

Giovanni

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From solkaa at gmail.com  Tue Jun  6 09:09:12 2006
From: solkaa at gmail.com (Sergey Dolgov)
Date: Tue, 6 Jun 2006 17:09:12 +0400
Subject: [SciPy-user] numpy.test() segfaults with numpy 0.9.8
In-Reply-To: <448463BB.3000709@ee.byu.edu>
References: <532c966c0606050701r498492f5v271c162dc1bee5aa@mail.gmail.com> <448463BB.3000709@ee.byu.edu>
Message-ID: <532c966c0606060609g63771729i13f9e29d4375b8b9@mail.gmail.com>

On 6/5/06, Travis Oliphant wrote:
> Sergey Dolgov wrote:
>
> > Hi, I've just built numpy 0.9.8 on a Debian box (i386, unstable), and
> > it fails to pass its tests (see below). BTW, all tests ran fine with
> > previous version 0.9.6 (but it seems like the failing one was
> > introduced only in 0.9.8).
>
> It's a problem with Python itself. It is known about and has been fixed
> in Python 2.5 and worked around in the SVN version of NumPy. Most code
> will not exercise the spot where the problem is (using the array scalar
> constructor for complex types directly), so I wouldn't worry about it
> at this point.

Thanks for your help, Travis. I've tried it with debian python2.5, and
the segfault is still there -- apparently debian's snapshot (20060409)
doesn't include the fix yet. I won't worry much anyway, as you suggest.

-- 
Sergey

From robert.kern at gmail.com  Tue Jun  6 11:11:46 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 06 Jun 2006 10:11:46 -0500
Subject: [SciPy-user] Trac
In-Reply-To: <44851372.4080801@mit.edu>
References: <44851372.4080801@mit.edu>
Message-ID: <44859B32.1030302@gmail.com>

Nick Fotopoulos wrote:
> Dear admins,
>
> I've forgotten my Trac username/password. Is there any way to have my
> username revealed and my password reset for me? I didn't see such a
> utility on the website. I'd like to update the enhancements I submitted
> for scipy.io.loadmat (fixed a byte-order bug in the enhancement).

I don't see any username that is obviously yours. You can always make
another account. I can take care of anything else offlist.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
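On Giovanni's data-storage question above: writing numpy arrays with
PyTables is straightforward in serial. A minimal sketch in the modern
PyTables spelling (parallel MPI writes are a separate matter that the
underlying HDF5 layer has to support):

    import numpy as np
    import tables

    a = np.arange(12.0).reshape(3, 4)

    h5 = tables.open_file("data.h5", mode="w")
    h5.create_array(h5.root, "a", a, title="example array")
    h5.close()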
From massimo.sandal at unibo.it  Tue Jun  6 14:20:21 2006
From: massimo.sandal at unibo.it (massimo sandal)
Date: Tue, 06 Jun 2006 20:20:21 +0200
Subject: [SciPy-user] linear_least_squares: OverFlow error or flapack.error
Message-ID: <4485C765.7020804@unibo.it>

Hi,

I'm trying to do a simple linear least squares fit of some data in an
application.

The relevant code runs about as follows, following closely the example
found on
http://mail.python.org/pipermail/python-list/2006-March/331693.html

---------
import matplotlib.numerix as nx

contact_x_points=nx.array(x_points[left_bound:right_bound])
contact_y_points=nx.array(y_points[left_bound:right_bound])

A=nx.ones((len(contact_x_points),2))
A[:,0]=contact_x_points
result=nx.linear_algebra.linear_least_squares(A,contact_y_points)
---------

...but when I run, it crashes with:

  File "hooke.py", line 202, in find_contact_point
    result=nx.linear_algebra.linear_least_squares(A,contact_y_points)
  File "/usr/lib/python2.3/site-packages/Numeric/LinearAlgebra.py", line 416, in linear_least_squares
    nlvl = max( 0, int( math.log( float(min( m,n ))/2. ) ) + 1 )
OverflowError: math range error

I also tried using scipy:

-----------
import scipy as sp

contact_x_points=sp.array(x_points[left_bound:right_bound])
contact_y_points=sp.array(y_points[left_bound:right_bound])

A=sp.ones((len(contact_x_points),2))
A[:,0]=contact_x_points
result=sp.linalg.lstsq(A,contact_y_points)
-------------

... with another error:

array_from_pyobj:intent(hide) must have defined dimensions.
rank=1 dimensions=[ 0 ]
Traceback:
[...]
  File "hooke.py", line 202, in find_contact_point
    result=sp.linalg.lstsq(A, contact_y_points)
  File "/usr/lib/python2.3/site-packages/scipy/linalg/basic.py", line 344, in lstsq
    overwrite_b = overwrite_b)
flapack.error: failed in converting hidden `s' of flapack.dgelss to C/Fortran array

In my .matplotlibrc the numerix backend is Numeric.
I'm on Debian Sarge; MPL version is 0.82; Scipy is 0.3.2

It must be noticed that I fail to declare

A=nx.ones((len(contact_x_points),2),dtype=float)

as the example seems to require, because it gives me another error:

TypeError: ones() got an unexpected keyword argument 'dtype'

...so if this is the problem, please tell me how to correctly pass the
dtype argument.

Since I'm quite a scipy/numeric newbie I guess there could be some
obvious blunder and/or more correct way of obtaining my fit, and I'd be
thankful to anyone pointing me at the solution...

Thanks,
Massimo

-- 
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387

From david.huard at gmail.com  Tue Jun  6 14:32:49 2006
From: david.huard at gmail.com (David Huard)
Date: Tue, 6 Jun 2006 14:32:49 -0400
Subject: [SciPy-user] linear_least_squares: OverFlow error or flapack.error
In-Reply-To: <4485C765.7020804@unibo.it>
References: <4485C765.7020804@unibo.it>
Message-ID: <91cf711d0606061132i772549bcg4f55cb985d7fa70f@mail.gmail.com>

For dtype, try Float instead of float.

David

2006/6/6, massimo sandal:
> [massimo's message quoted in full; snipped]
From massimo.sandal at unibo.it  Tue Jun  6 14:35:43 2006
From: massimo.sandal at unibo.it (massimo sandal)
Date: Tue, 06 Jun 2006 20:35:43 +0200
Subject: [SciPy-user] linear_least_squares: OverFlow error or flapack.error
In-Reply-To: <91cf711d0606061132i772549bcg4f55cb985d7fa70f@mail.gmail.com>
References: <4485C765.7020804@unibo.it> <91cf711d0606061132i772549bcg4f55cb985d7fa70f@mail.gmail.com>
Message-ID: <4485CAFF.2080609@unibo.it>

David Huard wrote:
> For dtype, try Float instead of float.

Playing with ipython I found that the correct parameter seems to be
typecode instead of dtype. I forced all my arrays to be float this way,
but the same errors persist, both in the numerix and scipy version.

m.
-- 
Massimo Sandal
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail:
Via Irnerio 48, 40126 Bologna, Italy

email:
massimo.sandal at unibo.it

tel: +39-051-2094388
fax: +39-051-2094387

From bhendrix at enthought.com  Tue Jun  6 14:43:37 2006
From: bhendrix at enthought.com (Bryce Hendrix)
Date: Tue, 06 Jun 2006 13:43:37 -0500
Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.7 Released
Message-ID: <4485CCD9.7050907@enthought.com>

Enthought is pleased to announce the release of Python Enthought
Edition Version 0.9.7 (http://code.enthought.com/enthon/) -- a python
distribution for Windows.

0.9.7 Release Notes:
--------------------
Version 0.9.7 of Python Enthought Edition includes an update to version
1.0.7 of the Enthought Tool Suite (ETS) Package and bug fixes -- you
can look at the release notes for this ETS version here:

  http://svn.enthought.com/downloads/enthought/changelog-release.1.0.7.html

About Python Enthought Edition:
-------------------------------
Python 2.3.5, Enthought Edition is a kitchen-sink-included Python
distribution for Windows including the following packages out of the
box:

  Numeric
  SciPy
  IPython
  Enthought Tool Suite
  wxPython
  PIL
  mingw
  f2py
  MayaVi
  Scientific Python
  VTK
  and many more...

More information is available about all Open Source code written and
released by Enthought, Inc. at http://code.enthought.com

From cookedm at physics.mcmaster.ca  Tue Jun  6 14:46:29 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Tue, 6 Jun 2006 14:46:29 -0400
Subject: [SciPy-user] linear_least_squares: OverFlow error or flapack.error
In-Reply-To: <4485C765.7020804@unibo.it>
References: <4485C765.7020804@unibo.it>
Message-ID: <20060606144629.0427f97f@arbutus.physics.mcmaster.ca>

On Tue, 06 Jun 2006 20:20:21 +0200 massimo sandal wrote:

> Hi,
>
> I'm trying to do a simple linear least squares fit of some data in an
> application.
>
> The relevant code runs about as follows, following closely the
> example found on
> http://mail.python.org/pipermail/python-list/2006-March/331693.html
>
> ---------
> import matplotlib.numerix as nx
>
> contact_x_points=nx.array(x_points[left_bound:right_bound])
> contact_y_points=nx.array(y_points[left_bound:right_bound])
>
> A=nx.ones((len(contact_x_points),2))
> A[:,0]=contact_x_points
> result=nx.linear_algebra.linear_least_squares(A,contact_y_points)
> ---------
>
> ...but when I run, it crashes with:
>
>   File "hooke.py", line 202, in find_contact_point
>     result=nx.linear_algebra.linear_least_squares(A,contact_y_points)
>   File "/usr/lib/python2.3/site-packages/Numeric/LinearAlgebra.py", line 416, in linear_least_squares
>     nlvl = max( 0, int( math.log( float(min( m,n ))/2. ) ) + 1 )
> OverflowError: math range error

You're using Numeric here, not numpy.

[snip error in scipy]

> In my .matplotlibrc the numerix backend is Numeric.
> I'm on Debian Sarge; MPL version is 0.82; Scipy is 0.3.2

Any errors in those versions aren't going to be fixed (they're too old;
scipy 0.3.2 depends on Numeric, not numpy, etc.).

You might be happier upgrading to numpy (you'll need a newer matplotlib
too). Since you're just doing linear least squares, you won't need
scipy. Unfortunately, there aren't any Debian packages yet...
> It must be noticed that I fail to declare
>
> A=nx.ones((len(contact_x_points),2),dtype=float)
>
> as the example seems to require, because it gives me another error:
>
> TypeError: ones() got an unexpected keyword argument 'dtype'
>
> ...so if this is the problem, please tell me how to correctly pass
> the dtype argument.

Numeric uses typecode (as you found out); numpy uses dtype.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca
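Following David's advice above, the same straight-line fit in modern
numpy looks like this (a sketch; the array contents are stand-ins for
the contact arrays):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])     # stand-in for contact_x_points
    y = np.array([0.1, 0.9, 2.1, 2.9])     # stand-in for contact_y_points

    A = np.ones((len(x), 2), dtype=float)  # dtype, not typecode, in numpy
    A[:, 0] = x
    (slope, intercept), res, rank, sv = np.linalg.lstsq(A, y, rcond=None)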
> > Since I'm quite a scipy/numeric newbie I guess there could be some > obvious blunder and/or more correct way of obtaining my fit, and I'd be > thankful to anyone pointing me at the solution... > > Thanks, > Massimo -------------- next part -------------- A non-text attachment was scrubbed... Name: bestFitLine.py Type: application/x-python Size: 2139 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Wed Jun 7 03:42:19 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 07 Jun 2006 09:42:19 +0200 Subject: [SciPy-user] manipulate arrays/matrices Message-ID: <4486835B.4080804@iam.uni-stuttgart.de> Hi all, Is there a better way to interchange two rows/columns of an array ? from scipy import * n=6 A = diag(arange(1,n+1))+diag(ones(n-2),2)+diag(ones(n-2),-2) # # Columns/Rows # j = 1 k = 2 print print 'A_old' print print A # # Row interchange # tmp = A[j,:].copy() A[j,:] = A[k,:] A[k,:] = tmp # # Column interchange # tmp = A[:,j].copy() A[:,j] = A[:,k] A[:,k] = tmp print print 'A_new' print print A Nils From pau.gargallo at gmail.com Wed Jun 7 05:03:15 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Wed, 7 Jun 2006 11:03:15 +0200 Subject: [SciPy-user] manipulate arrays/matrices In-Reply-To: <4486835B.4080804@iam.uni-stuttgart.de> References: <4486835B.4080804@iam.uni-stuttgart.de> Message-ID: <6ef8f3380606070203w5c835ad7xaef29ff6a0f6aa9f@mail.gmail.com> On 6/7/06, Nils Wagner wrote: > Hi all, > > Is there a better way to interchange two rows/columns of an array ? > > from scipy import * > > n=6 > > A = diag(arange(1,n+1))+diag(ones(n-2),2)+diag(ones(n-2),-2) > > # > # Columns/Rows > # > j = 1 > k = 2 > > print > print 'A_old' > print > print A > # > # Row interchange > # > tmp = A[j,:].copy() > A[j,:] = A[k,:] > A[k,:] = tmp > # > # Column interchange > # > tmp = A[:,j].copy() > A[:,j] = A[:,k] > A[:,k] = tmp > > print > print 'A_new' > print > print A > > > > Nils > your code looks very good to me. i just noticed that the 'python way': A[j], A[k] = A[k], A[j] only works with 1d arrays. Is this wanted? pau (this is probably a numpy-discussion question) From massimo.sandal at unibo.it Wed Jun 7 05:15:48 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 07 Jun 2006 11:15:48 +0200 Subject: [SciPy-user] linear_least_squares: OverFlow error or flapack.error In-Reply-To: <20060606144629.0427f97f@arbutus.physics.mcmaster.ca> References: <4485C765.7020804@unibo.it> <20060606144629.0427f97f@arbutus.physics.mcmaster.ca> Message-ID: <44869944.2040208@unibo.it> > You're using Numeric here, not numpy. Sigh. I guessed they were not 100% compatible. > You might be happier upgrading to numpy (you'll need a newer > matplotlib too). Since you're just doing linear least squares, you won't > need scipy. OK. So I guess today is the day to upgrade my Sarge to (k)ubuntu 6.06 and download the latest fresh matplotlib/numpy from SF.net, as I was planning to do. Always procrastinating, but... :) m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From david.douard at logilab.fr Wed Jun 7 05:22:22 2006 From: david.douard at logilab.fr (David Douard) Date: Wed, 7 Jun 2006 11:22:22 +0200 Subject: [SciPy-user] manipulate arrays/matrices In-Reply-To: <4486835B.4080804@iam.uni-stuttgart.de> References: <4486835B.4080804@iam.uni-stuttgart.de> Message-ID: <20060607092222.GA1173@logilab.fr> On Wed, Jun 07, 2006 at 09:42:19AM +0200, Nils Wagner wrote: > Hi all, > > Is there a better way to interchange two rows/columns of an array ? > Hi, using numpy: from numpy import * A = diag(arange(1,n+1))+diag(ones(n-2),2)+diag(ones(n-2),-2) # swap rows 1 and 3 A[array([1,3])] = A[array([3,1])] #swap columns 1 and 3 A[:,array([1,3])] = A[:,array([3,1])] David -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations Développement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From maik.troemel at maitro.net Wed Jun 7 05:42:08 2006 From: maik.troemel at maitro.net (Maik Trömel) Date: Wed, 07 Jun 2006 11:42:08 +0200 Subject: [SciPy-user] toimage Message-ID: <44869F70.7040704@maitro.net> Hello, I've got some problems creating an RGB picture with "toimage". I have created an array with integer values between 0 and 4095. I tried with: i = scipy.misc.toimage(arr, high = 0, low = 255, cmax=4095, cmin=0, pal=3, mode='P') But it didn't work. I can't find good documentation of the command. I think the problem is "pal" and "mode". Which values do I have to choose for these parameters? Thanks for your help. Greetings Maik From emsellem at obs.univ-lyon1.fr Wed Jun 7 06:18:29 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 07 Jun 2006 12:18:29 +0200 Subject: [SciPy-user] slow integrals Message-ID: <4486A7F5.7070708@obs.univ-lyon1.fr> Hi! I am performing a double integration using scipy.integrate.quad and it seems rather slow, so I would like to know if there is any way to make it more efficient (I have a pure C version of that integral and it is much faster, maybe by a factor of more than 10!). I am doing this integral as: result = scipy.integrate.quad(IntlosMu1, -Inf, Inf,...) so an integral between - and + infinity. The integrand (IntlosMu1) is itself an integral which I am computing using a direct quadrature (it is an integral over an adimensional variable T which varies between 0 and 1). So I compute once and for all the abscissas and weights for the quadrature using [Xquad, Wquad] = real(scipy.special.orthogonal.ps_roots(Nquad)) and then I pass on Xquad and Wquad as parameters to scipy.integrate.quad, to be used in the integrand. (One last note: I am computing this double integral many many times on different points so I really need efficiency here...) So the questions I have are: - can you already see something wrong with this in terms of efficiency? (I doubt it since I don't provide much info, but just in case) - are there other integration schemes I could use to do that? - how should I try to test things and see where the bottleneck is? thanks a lot for any input on all this! all the best Eric
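For readers following this thread, here is a rough sketch of the structure Eric describes: a fixed Gauss rule for the inner (0,1) integral, and adaptive quad over the infinite outer axis. Everything specific below is invented for illustration (the placeholder integrand and the choice Nquad = 48); only the ps_roots/quad wiring mirrors Eric's description.

import numpy
from numpy import real, exp, Inf
from scipy import integrate
import scipy.special.orthogonal as so

Nquad = 48
# inner-integral nodes and weights on (0,1), computed once and reused
Xquad, Wquad = [real(v) for v in so.ps_roots(Nquad)]

def IntlosMu1(los, x, w):
    # placeholder inner integrand: any function that decays at +/- infinity;
    # the inner integral over t in (0,1) is done by direct quadrature
    return (w * exp(-(los ** 2) * x)).sum()

result, err = integrate.quad(IntlosMu1, -Inf, Inf, args=(Xquad, Wquad))
print result

Note that every evaluation QUADPACK requests lands back in Python here, which is usually where the factor of ten against a pure C version goes.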
From Peter.Bienstman at ugent.be Wed Jun 7 07:36:31 2006 From: Peter.Bienstman at ugent.be (Peter Bienstman) Date: Wed, 7 Jun 2006 13:36:31 +0200 Subject: [SciPy-user] Simpson bug? Message-ID: <200606071336.31420.Peter.Bienstman@ugent.be> Hi, There seems to be an inconsistency in the Simpson quadrature routines (in 0.4.8). Depending on whether the number of points is odd or even, you get a scalar or an array as result: import scipy.integrate >>print scipy.integrate.simps([1,2,3]) 4.0 >>print scipy.integrate.simps([1,2,3,4]) [ 7.5] Cheers, Peter -- ------------------------------------------------ Peter Bienstman Ghent University, Dept. of Information Technology Sint-Pietersnieuwstraat 41, B-9000 Gent, Belgium tel: +32 9 264 34 46, fax: +32 9 264 35 93 WWW: http://photonics.intec.UGent.be email: Peter.Bienstman at UGent.be ------------------------------------------------ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: not available URL: From stefan at sun.ac.za Wed Jun 7 08:35:53 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 7 Jun 2006 14:35:53 +0200 Subject: [SciPy-user] toimage In-Reply-To: <44869F70.7040704@maitro.net> References: <44869F70.7040704@maitro.net> Message-ID: <20060607123553.GF14024@mentat.za.net> Hi Maik On Wed, Jun 07, 2006 at 11:42:08AM +0200, Maik Trömel wrote: > I've got some problems creating an RGB picture with "toimage". > I have created an array with integer values between 0 and 4095. > I tried with: > > i = scipy.misc.toimage(arr, high = 0, low = 255, cmax=4095, cmin=0, pal=3, mode='P') 'pal' is the image colourmap. If you look at the toimage docstring, it says: For 2-D arrays, if pal is a valid (N,3) byte-array giving the RGB values (from 0 to 255) then mode='P', otherwise mode='L', unless mode is given as 'F' or 'I' in which case a float and/or integer array is made For example, let's generate a square image that has a linear gradient: import numpy as N im = N.empty((100,100),dtype=N.Float) im[:] = N.linspace(0,4096,100) If you have matplotlib installed, you can view this using import pylab as P P.imshow(im,cmap=P.cm.gray) To convert the array im to a PIL image, you can now do I = scipy.misc.toimage(im,high=255,low=0,cmax=4095,cmin=0) and display it using I.show() Now, if we'd like to change the colourmap, we'll have to generate one. A colourmap is of shape (N,3), i.e. the RGB values for N indices. The following commands will generate a "cool" colourmap. map = N.empty((256,3),dtype=N.uint8) map[:,0] = N.arange(0,256,dtype=N.uint8) map[:,1] = map[:,0][::-1] map[:,2] = map[:,0] + map[:,1] We create the PIL image using I = scipy.misc.toimage(im,high=255,low=0,cmax=4095,cmin=0,pal=map,mode='P') after which we can display it using I.show() Regards Stéfan
From maik.troemel at maitro.net Wed Jun 7 09:55:46 2006 From: maik.troemel at maitro.net (Maik Trömel) Date: Wed, 07 Jun 2006 15:55:46 +0200 Subject: [SciPy-user] toimage In-Reply-To: <20060607123553.GF14024@mentat.za.net> References: <44869F70.7040704@maitro.net> <20060607123553.GF14024@mentat.za.net> Message-ID: <4486DAE2.2020900@maitro.net> Hi Stefan, works great! Thanks. Greetings Maik Stefan van der Walt wrote: >Hi Maik > >On Wed, Jun 07, 2006 at 11:42:08AM +0200, Maik Trömel wrote: > > >>I've got some problems creating an RGB picture with "toimage". >>I have created an array with integer values between 0 and 4095. >>I tried with: >> >>i = scipy.misc.toimage(arr, high = 0, low = 255, cmax=4095, cmin=0, pal=3, mode='P') >> >> > >'pal' is the image colourmap. If you look at the toimage docstring, >it says: > > For 2-D arrays, if pal is a valid (N,3) byte-array giving the RGB values > (from 0 to 255) then mode='P', otherwise mode='L', unless mode is given > as 'F' or 'I' in which case a float and/or integer array is made > >For example, let's generate a square image that has a linear gradient: > >import numpy as N >im = N.empty((100,100),dtype=N.Float) >im[:] = N.linspace(0,4096,100) > >If you have matplotlib installed, you can view this using > >import pylab as P >P.imshow(im,cmap=P.cm.gray) > >To convert the array im to a PIL image, you can now do > >I = scipy.misc.toimage(im,high=255,low=0,cmax=4095,cmin=0) > >and display it using > >I.show() > >Now, if we'd like to change the colourmap, we'll have to generate one. >A colourmap is of shape (N,3), i.e. the RGB values for N indices. The >following commands will generate a "cool" colourmap. > >map = N.empty((256,3),dtype=N.uint8) >map[:,0] = N.arange(0,256,dtype=N.uint8) >map[:,1] = map[:,0][::-1] >map[:,2] = map[:,0] + map[:,1] > >We create the PIL image using > >I = scipy.misc.toimage(im,high=255,low=0,cmax=4095,cmin=0,pal=map,mode='P') > >after which we can display it using > >I.show() > >Regards >Stéfan > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From vedranf at riteh.hr Wed Jun 7 14:48:00 2006 From: vedranf at riteh.hr (Vedran Furač) Date: Wed, 07 Jun 2006 20:48:00 +0200 Subject: [SciPy-user] fsolve() crashes python on windows Message-ID: <44871F60.7030805@riteh.hr> When I call optimize.fsolve(...) the Python interpreter crashes immediately, no error messages, nothing, just brings me back to c:\ On linux the same code works fine. I tried it on different computers. It seems that it doesn't crash on pentium4, only on athlon and pentium2.
python 2.4.3 (activestate), scipy 0.4.8/0.4.9 (binaries from sourceforge) with numpy 0.9.6/0.9.8, windows xp Code: >>> from scipy.optimize import * >>> def foo(x): ... whatever >>> fsolve(foo, whatever) c:\ Regards, Vedran Furač From webb.sprague at gmail.com Wed Jun 7 15:03:16 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Wed, 7 Jun 2006 12:03:16 -0700 Subject: [SciPy-user] inverse to scipy.stats.zprob? Message-ID: Hi all, Could someone tell me what the inverse function to ST.zprob is? Here is what I know how to do: In [76]: scipy.stats.zprob(1.96) Out[76]: 0.97500210485177952 In [79]: scipy.stats.zprob(0) Out[79]: 0.5 etc Here is what I want to do: In [77]:scipy.stats.SOMEFUNCTION(.5) Out[77]: 0.0 In [78]:scipy.stats.SOMEFUNCTION(.95) Out[78]: 1.96 Thoughts? W From robert.kern at gmail.com Wed Jun 7 15:10:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 07 Jun 2006 14:10:12 -0500 Subject: [SciPy-user] inverse to scipy.stats.zprob? In-Reply-To: References: Message-ID: <44872494.1030002@gmail.com> Webb Sprague wrote: > Hi all, > > Could someone tell me what the inverse function to ST.zprob is? > > Here is what I know how to do: > > In [76]: scipy.stats.zprob(1.96) > Out[76]: 0.97500210485177952 > In [79]: scipy.stats.zprob(0) > Out[79]: 0.5 > etc > > Here is what I want to do: > In [77]:scipy.stats.SOMEFUNCTION(.5) > Out[77]: 0.0 > In [78]:scipy.stats.SOMEFUNCTION(.95) > Out[78]: 1.96 In [1]: from scipy import stats In [2]: stats.norm.ppf(0.5) Out[2]: array(0.0) In [3]: stats.norm.ppf(0.95) Out[3]: array(1.6448536269514722) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Wed Jun 7 16:40:03 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 7 Jun 2006 16:40:03 -0400 Subject: [SciPy-user] slow integrals In-Reply-To: <4486A7F5.7070708@obs.univ-lyon1.fr> References: <4486A7F5.7070708@obs.univ-lyon1.fr> Message-ID: <20060607204003.GA23340@arbutus.physics.mcmaster.ca> On Wed, Jun 07, 2006 at 12:18:29PM +0200, Eric Emsellem wrote: > Hi! > > I am performing a double integration using scipy.integrate.quad and it > seems rather slow, so I would like to know if there is any way to > make it more efficient (I have a pure C version of that integral and it > is much faster, maybe by a factor of more than 10!). What do you use for your quadrature routine in your C version? quad() uses QUADPACK, and calls your function once for each point, so it doesn't take advantage of the speedup from using arrays. > I am doing this integral as: > > result = scipy.integrate.quad(IntlosMu1, -Inf, Inf,...) > > so an integral between - and + infinity. That's always fun. Note that quad() will use a Fourier integral for infinite limits. I can suggest the usual tricks: - use a weighting function. The quad_explain() function has the info on what's usable - map to a finite interval using some transform (arctan, say) > The integrand (IntlosMu1) is itself an integral which I am computing > using a direct quadrature (it is an integral over an adimensional variable > T which varies between 0 and 1). > So I compute once and for all the abscissas and weights for the quadrature using > > [Xquad, Wquad] = real(scipy.special.orthogonal.ps_roots(Nquad)) > > and then I pass on Xquad and Wquad as parameters to > scipy.integrate.quad, to be used in the integrand.
You might want to try dblquad; it does essentially what you're doing above, but does the inner integral adaptively (AFAIK), so it may use fewer points. > (One last note: I am computing this double integral many many times on > different points so I really need efficiency here...) > > So the questions I have are: > - can you already see something wrong with this in terms of efficiency? > (I doubt it since I don't provide much info, but just in case) > - are there other integration schemes I could use to do that? I would suggest integrate.quadrature, which does a simple Gaussian integration scheme (you'll need to do it on a finite interval). Looking at quadrature(), though, it looks like it wraps the passed function in an inefficient value-at-a-time wrapper (vec_func). Try removing that wrapper from the source, so it will call your function with an array of points to evaluate. Or raise a TypeError in your function if passed a scalar. > - how should I try to test things and see where the bottleneck is? Profiling, I guess. It may not pick up the callbacks to your function from QUADPACK, though. Also look at the infodict returned by quad(); quad_explain() describes the stuff in there. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From arauzo at decsai.ugr.es Wed Jun 7 18:46:38 2006 From: arauzo at decsai.ugr.es (Antonio Arauzo Azofra) Date: Thu, 08 Jun 2006 00:46:38 +0200 Subject: [SciPy-user] Bug on 0.4.9 ? Message-ID: <4487574E.1040006@decsai.ugr.es> The following code works fine in Scipy 0.4.8, giving as the covariance matrix result: 20.312516964192504 (I think the result should be a matrix, but this is not the case now; I suppose there is some reason for it to be a number). Anyway, it is very strange that scipy.cov, instead of returning a 1x1 matrix or a number, returns a matrix full of NaNs :-? Can anybody tell me if this is a bug or a problem with my installation?
>>> import scipy >>> scipy.__version__ipy '0.4.9' >>> t=[[85.0], [85.0], [86.0], [91.0], [87.0], [98.0], [88.0], [88.0], [92.0], [90.0], [89.0], [82.0], [90.0], [86.0], [96.0], [91.0], [89.0], [89.0], [91.0], [94.0], [92.0], [93.0], [90.0], [92.0], [90.0], [88.0], [87.0], [86.0], [91.0], [93.0], [88.0], [94.0], [91.0], [85.0], [79.0], [85.0], [89.0], [84.0], [89.0], [89.0], [86.0], [85.0], [88.0], [92.0], [91.0], [83.0], [85.0], [92.0], [94.0], [87.0], [84.0], [96.0], [90.0], [90.0], [90.0], [91.0], [87.0], [89.0], [85.0], [103.0], [90.0], [90.0], [90.0], [87.0], [90.0], [86.0], [90.0], [87.0], [96.0], [91.0], [95.0], [92.0], [89.0], [94.0], [92.0], [94.0], [88.0], [92.0], [92.0], [84.0], [88.0], [86.0], [99.0], [88.0], [89.0], [90.0], [81.0], [89.0], [92.0], [85.0], [92.0], [89.0], [90.0], [91.0], [91.0], [91.0], [88.0], [87.0], [87.0], [87.0], [88.0], [90.0], [86.0], [92.0], [85.0], [89.0], [91.0], [96.0], [79.0], [90.0], [89.0], [88.0], [92.0], [91.0], [83.0], [90.0], [92.0], [93.0], [86.0], [97.0], [87.0], [86.0], [87.0], [92.0], [90.0], [99.0], [92.0], [95.0], [92.0], [95.0], [90.0], [96.0], [95.0], [92.0], [91.0], [90.0], [88.0], [100.0], [98.0], [91.0], [92.0], [93.0], [90.0], [97.0], [93.0], [90.0], [92.0], [88.0], [89.0], [92.0], [92.0], [93.0], [97.0], [84.0], [90.0], [92.0], [97.0], [91.0], [93.0], [92.0], [90.0], [91.0], [92.0], [92.0], [86.0], [98.0], [92.0], [97.0], [93.0], [94.0], [87.0], [88.0], [84.0], [94.0], [97.0], [92.0], [82.0], [88.0], [95.0], [88.0], [91.0], [83.0], [91.0], [86.0], [91.0], [90.0], [90.0], [89.0], [85.0], [85.0], [78.0], [88.0], [92.0], [91.0], [94.0], [88.0], [88.0], [90.0], [87.0], [65.0], [90.0], [85.0], [88.0], [86.0], [82.0], [86.0], [94.0], [87.0], [83.0], [93.0], [101.0], [92.0], [92.0], [86.0], [85.0], [86.0], [86.0], [81.0], [91.0], [91.0], [92.0], [91.0], [93.0], [87.0], [83.0], [95.0], [93.0], [84.0], [87.0], [86.0], [88.0], [90.0], [88.0], [93.0], [98.0], [87.0], [94.0], [88.0], [89.0], [87.0], [93.0], [88.0], [94.0], [91.0], [90.0], [91.0], [88.0], [82.0], [85.0], [91.0], [98.0], [86.0], [89.0], [82.0], [83.0], [96.0], [94.0], [93.0], [93.0], [91.0], [90.0], [87.0], [91.0], [86.0], [91.0], [88.0], [85.0], [89.0], [95.0], [94.0], [96.0], [90.0], [94.0], [99.0], [94.0], [92.0], [87.0], [92.0], [98.0], [92.0], [97.0], [93.0], [95.0], [99.0], [98.0], [92.0], [96.0], [95.0], [86.0], [102.0], [85.0], [91.0], [91.0], [93.0], [98.0], [82.0], [95.0], [97.0], [100.0], [88.0], [91.0], [92.0], [86.0], [91.0], [87.0], [87.0], [99.0], [96.0], [98.0], [91.0]] >>> a=scipy.array(t) >>> scipy.cov(a) array([[ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], ..., [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan], [ nan, nan, nan, ..., nan, nan, nan]]) -- Regards, Antonio Arauzo Azofra From cookedm at physics.mcmaster.ca Wed Jun 7 21:47:54 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 7 Jun 2006 21:47:54 -0400 Subject: [SciPy-user] Simpson bug? In-Reply-To: <200606071336.31420.Peter.Bienstman@ugent.be> References: <200606071336.31420.Peter.Bienstman@ugent.be> Message-ID: <20060608014754.GA23897@arbutus.physics.mcmaster.ca> On Wed, Jun 07, 2006 at 01:36:31PM +0200, Peter Bienstman wrote: > Hi, > > There seems to be an inconsistency in the Simpson quadrature routines (in > 0.4.8). 
Depending on whether the number of points is odd or even, you get a > scalar or an array as result: > > import scipy.integrate > > >>print scipy.integrate.simps([1,2,3]) > > 4.0 > > >>print scipy.integrate.simps([1,2,3,4]) > > [ 7.5] I've fixed this in svn. The simple routines like simps in scipy.integrate were indexing arrays using a list of slices, instead of a tuple. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From scipy at mspacek.mm.st Thu Jun 8 00:20:22 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Wed, 07 Jun 2006 21:20:22 -0700 Subject: [SciPy-user] "Approximately equal to" operator? Message-ID: <4487A586.2000707@mspacek.mm.st> Hello, I'm comparing two numpy 1d arrays of the same length. One has ints, the other floats. From what I can tell, using == isn't very reliable when mixing ints and floats. Sometimes 1.0 == 1, and sometimes it doesn't. For example: [mspacek]|71> arange(10, 13) <71> array([10, 11, 12]) [mspacek]|72> arange(10, 13, 0.1)[[0, 10, 20]] <72> array([ 10., 11., 12.]) [mspacek]|73> arange(10, 13, 0.1)[[0, 10, 20]] == arange(10,13) <73> array([True, False, False], dtype=bool) In this case, 10 == 10.0, but 11 != 11.0 and 12 != 12.0 I guess comparing ints to floats is a touchy thing, and depends on the system's C libraries. Maybe minuscule errors accumulate when building up a float array using arange(). Anyways, rounding and then converting the float array into an int array doesn't seem very Pythonic. I've noticed there's a numpy function allclose(), but this doesn't return a boolean array, just a single True if all the elements are equal. Is there a "~=" or "approximately equal to" operator that does an elementwise comparison of two arrays, and given a certain tolerance, returns a boolean array? Cheers, Martin From vincefn at users.sourceforge.net Thu Jun 8 02:16:22 2006 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Thu, 8 Jun 2006 08:16:22 +0200 Subject: [SciPy-user] "Approximately equal to" operator? In-Reply-To: <4487A586.2000707@mspacek.mm.st> References: <4487A586.2000707@mspacek.mm.st> Message-ID: <200606080816.23013.vincefn@users.sourceforge.net> On Thursday 08 June 2006 06:20, Martin Spacek wrote: > Is there a "~=" or "approximately equal to" operator that does an > elementwise comparison of two arrays, and given a certain tolerance, > returns a boolean array? You mean something like (to compare a and b): abs(a-b)<1e-6 Vincent -- Vincent Favre-Nicolin Universit? Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From robert.kern at gmail.com Thu Jun 8 02:22:22 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 01:22:22 -0500 Subject: [SciPy-user] "Approximately equal to" operator? In-Reply-To: <4487A586.2000707@mspacek.mm.st> References: <4487A586.2000707@mspacek.mm.st> Message-ID: <4487C21E.2080507@gmail.com> Martin Spacek wrote: > I've noticed > there's a numpy function allclose(), but this doesn't return a boolean > array, just a single True if all the elements are equal. Well, if you take a look at the source of allclose, the key part is this: less(absolute(x-y), atol + rtol * absolute(y)) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From scipy at mspacek.mm.st Thu Jun 8 02:31:49 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Wed, 07 Jun 2006 23:31:49 -0700 Subject: [SciPy-user] "Approximately equal to" operator? In-Reply-To: <200606080816.23013.vincefn@users.sourceforge.net> References: <4487A586.2000707@mspacek.mm.st> <200606080816.23013.vincefn@users.sourceforge.net> Message-ID: <4487C455.6050700@mspacek.mm.st> Yup, that would do it. Should've thought of that, duh. Thanks, Martin Vincent Favre-Nicolin wrote: > On Thursday 08 June 2006 06:20, Martin Spacek wrote: >> Is there a "~=" or "approximately equal to" operator that does an >> elementwise comparison of two arrays, and given a certain tolerance, >> returns a boolean array? > > You mean something like (to compare a and b): > > abs(a-b)<1e-6 > > Vincent Robert Kern wrote: > Well, if you take a look at the source of allclose, the key part is this: > > less(absolute(x-y), atol + rtol * absolute(y)) > From david.grant at telus.net Thu Jun 8 02:38:59 2006 From: david.grant at telus.net (David Grant) Date: Wed, 07 Jun 2006 23:38:59 -0700 Subject: [SciPy-user] manipulate arrays/matrices In-Reply-To: <20060607092222.GA1173@logilab.fr> References: <4486835B.4080804@iam.uni-stuttgart.de> <20060607092222.GA1173@logilab.fr> Message-ID: <200606072339.00798.david.grant@telus.net> On Wednesday 07 June 2006 02:22, David Douard wrote: > On Wed, Jun 07, 2006 at 09:42:19AM +0200, Nils Wagner wrote: > > Hi all, > > > > Is there a better way to interchange two rows/columns of an array ? > > Hi, using numpy: > > from numpy import * > > A = diag(arange(1,n+1))+diag(ones(n-2),2)+diag(ones(n-2),-2) > > # swap rows 1 and 3 > A[array([1,3])] = A[array([3,1])] > > #swap columns 1 and 3 > A[:,array([1,3])] = A[:,array([3,1])] Even more compact would be to just use a list for the indices: # swap rows 1 and 3 A[[1,3]] = A[[3,1]] #swap columns 1 and 3 A[:,[1,3]] = A[:,[3,1]] Or is there a reason to cast to an array for the indices? David From david.douard at logilab.fr Thu Jun 8 02:51:49 2006 From: david.douard at logilab.fr (David Douard) Date: Thu, 8 Jun 2006 08:51:49 +0200 Subject: [SciPy-user] manipulate arrays/matrices In-Reply-To: <200606072339.00798.david.grant@telus.net> References: <4486835B.4080804@iam.uni-stuttgart.de> <20060607092222.GA1173@logilab.fr> <200606072339.00798.david.grant@telus.net> Message-ID: <20060608065149.GA1135@logilab.fr> On Wed, Jun 07, 2006 at 11:38:59PM -0700, David Grant wrote: > On Wednesday 07 June 2006 02:22, David Douard wrote: > > On Wed, Jun 07, 2006 at 09:42:19AM +0200, Nils Wagner wrote: > > > Hi all, > > > > > > Is there a better way to interchange two rows/columns of an array ? > > > > Hi, using numpy: > > > > from numpy import * > > > > A = diag(arange(1,n+1))+diag(ones(n-2),2)+diag(ones(n-2),-2) > > > > # swap rows 1 and 3 > > A[array([1,3])] = A[array([3,1])] > > > > #swap columns 1 and 3 > > A[:,array([1,3])] = A[:,array([3,1])] > > Even more compact would be to just use a list for the indices: > # swap rows 1 and 3 > A[[1,3]] = A[[3,1]] > #swap columns 1 and 3 > A[:,[1,3]] = A[:,[3,1]] > > Or is there a reason to cast to an array for the indices? Nope. You're absolutely right. Don't know why I made the explicit casts. 
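A footnote on Pau's earlier question about why A[j], A[k] = A[k], A[j] only swaps for 1-d arrays: for a 2-d array, A[j] is a view into A, so the first assignment overwrites the data the second one is about to read, whereas fancy indexing with a list builds copies on the right-hand side. A small sketch, assuming current numpy semantics:

import numpy

A = numpy.arange(9).reshape(3, 3)
B = A.copy()

# tuple swap: the right-hand side holds views, so row 2 gets clobbered
A[1], A[2] = A[2], A[1]
print A    # rows 1 and 2 end up identical; no swap happened

# fancy indexing copies the selected rows first, so this really swaps
B[[1, 2]] = B[[2, 1]]
print B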
David (too) -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations D?veloppement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From Sebastien.DEMENTEN at alumni.insead.edu Thu Jun 8 03:42:10 2006 From: Sebastien.DEMENTEN at alumni.insead.edu (DE MENTEN Sebastien) Date: Thu, 8 Jun 2006 09:42:10 +0200 Subject: [SciPy-user] Bug on 0.4.9 ? Message-ID: <533576C7D0D0F64EBD47E8E11922B955098FBDAA@GAIA.FBL.insead.intra> In scipy 0.4.9, you get your results if you compute cov(transpose(t)) instead of cov(t). However, I do not know which one should be the standard. > -----Original Message----- > From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] > On Behalf Of Antonio Arauzo Azofra > Sent: Thursday, June 08, 2006 00:47 > To: scipy-user at scipy.net > Cc: Jose Manuel Benitez Sanchez > Subject: [SciPy-user] Bug on 0.4.9 ? > > The following code works fine in Scipy 0.4.8, giving as covariance > matrix result: 20.312516964192504 (i think the result should be a matrix > but this is not the case now. i supposse there will be some reason to be > a number). > > Anyway, it is very strange that scipy.cov instead of returning a 1x1 > matrix or a number it returns a recurrent matrix full of NaNs :-? > > Can anybody tell me if this is a bug or a problem on my instalation? > > >>> import scipy > >>> scipy.__version__ipy > '0.4.9' > >>> t=[[85.0], [85.0], [86.0], [91.0], [87.0], [98.0], [88.0], [88.0], > [92.0], [90.0], [89.0], [82.0], [90.0], [86.0], [96.0], [91.0], [89.0], > [89.0], [91.0], [94.0], [92.0], [93.0], [90.0], [92.0], [90.0], [88.0], > [87.0], [86.0], [91.0], [93.0], [88.0], [94.0], [91.0], [85.0], [79.0], > [85.0], [89.0], [84.0], [89.0], [89.0], [86.0], [85.0], [88.0], [92.0], > [91.0], [83.0], [85.0], [92.0], [94.0], [87.0], [84.0], [96.0], [90.0], > [90.0], [90.0], [91.0], [87.0], [89.0], [85.0], [103.0], [90.0], [90.0], > [90.0], [87.0], [90.0], [86.0], [90.0], [87.0], [96.0], [91.0], [95.0], > [92.0], [89.0], [94.0], [92.0], [94.0], [88.0], [92.0], [92.0], [84.0], > [88.0], [86.0], [99.0], [88.0], [89.0], [90.0], [81.0], [89.0], [92.0], > [85.0], [92.0], [89.0], [90.0], [91.0], [91.0], [91.0], [88.0], [87.0], > [87.0], [87.0], [88.0], [90.0], [86.0], [92.0], [85.0], [89.0], [91.0], > [96.0], [79.0], [90.0], [89.0], [88.0], [92.0], [91.0], [83.0], [90.0], > [92.0], [93.0], [86.0], [97.0], [87.0], [86.0], [87.0], [92.0], [90.0], > [99.0], [92.0], [95.0], [92.0], [95.0], [90.0], [96.0], [95.0], [92.0], > [91.0], [90.0], [88.0], [100.0], [98.0], [91.0], [92.0], [93.0], [90.0], > [97.0], [93.0], [90.0], [92.0], [88.0], [89.0], [92.0], [92.0], [93.0], > [97.0], [84.0], [90.0], [92.0], [97.0], [91.0], [93.0], [92.0], [90.0], > [91.0], [92.0], [92.0], [86.0], [98.0], [92.0], [97.0], [93.0], [94.0], > [87.0], [88.0], [84.0], [94.0], [97.0], [92.0], [82.0], [88.0], [95.0], > [88.0], [91.0], [83.0], [91.0], [86.0], [91.0], [90.0], [90.0], [89.0], > [85.0], [85.0], [78.0], [88.0], [92.0], [91.0], [94.0], [88.0], [88.0], > [90.0], [87.0], [65.0], [90.0], [85.0], [88.0], [86.0], [82.0], [86.0], > [94.0], [87.0], [83.0], [93.0], [101.0], [92.0], [92.0], [86.0], [85.0], > [86.0], [86.0], [81.0], [91.0], [91.0], [92.0], [91.0], [93.0], [87.0], > [83.0], [95.0], [93.0], [84.0], [87.0], 
[86.0], [88.0], [90.0], [88.0], > [93.0], [98.0], [87.0], [94.0], [88.0], [89.0], [87.0], [93.0], [88.0], > [94.0], [91.0], [90.0], [91.0], [88.0], [82.0], [85.0], [91.0], [98.0], > [86.0], [89.0], [82.0], [83.0], [96.0], [94.0], [93.0], [93.0], [91.0], > [90.0], [87.0], [91.0], [86.0], [91.0], [88.0], [85.0], [89.0], [95.0], > [94.0], [96.0], [90.0], [94.0], [99.0], [94.0], [92.0], [87.0], [92.0], > [98.0], [92.0], [97.0], [93.0], [95.0], [99.0], [98.0], [92.0], [96.0], > [95.0], [86.0], [102.0], [85.0], [91.0], [91.0], [93.0], [98.0], [82.0], > [95.0], [97.0], [100.0], [88.0], [91.0], [92.0], [86.0], [91.0], [87.0], > [87.0], [99.0], [96.0], [98.0], [91.0]] > >>> a=scipy.array(t) > >>> scipy.cov(a) > array([[ nan, nan, > nan, ..., nan, > nan, nan], > [ nan, nan, > nan, ..., nan, > nan, nan], > [ nan, nan, > nan, ..., nan, > nan, nan], > ..., > [ nan, nan, > nan, ..., nan, > nan, nan], > [ nan, nan, > nan, ..., nan, > nan, nan], > [ nan, nan, > nan, ..., nan, > nan, nan]]) > > -- > Regards, > Antonio Arauzo Azofra > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From alexandre.guimond at mirada-solutions.com Thu Jun 8 03:55:19 2006 From: alexandre.guimond at mirada-solutions.com (Alexandre Guimond) Date: Thu, 8 Jun 2006 08:55:19 +0100 Subject: [SciPy-user] bug fixes + changes to scipy.optimize Message-ID: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C0@oxfh5f1a> Hello. I've fixed a small bug in scipy.optimize.brent where the maximum number of iterations was not taken into account. I've also made changes so that: 1) it is possible to specify the initial direction set used in powell 2) set bracket parameters from higher level calls such as powell. For example, it allows to do: scipy.optimize.fmin_powell( lambda x: - sm.Ll( x ), bracket_keywords = { 'grow_limit': 10 } ) Note that changes in 2) are incomplete in the sense that similar changes could also be done for brent parameters as well as to golden, etc. in order to keep a consistence interface. I could finish this if the patch was to be included in scipy. Let me know. The patch is attached. This is against scipy 0.4.9 best alex NOTICE: This e-mail message and all attachments transmitted with it may contain legally privileged and confidential information intended solely for the use of the addressee. If the reader of this message is not the intended recipient, you are hereby notified that any reading, dissemination, distribution, copying, or other use of this message or its attachments, hyperlinks, or any other files of any kind is strictly prohibited. If you have received this message in error, please notify the sender immediately by telephone (+44-1865-265500) or by a reply to this electronic mail message and delete this message and all copies and backups thereof. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: optimize.py Type: application/octet-stream Size: 55028 bytes Desc: optimize.py URL: From emsellem at obs.univ-lyon1.fr Thu Jun 8 03:55:33 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 08 Jun 2006 09:55:33 +0200 Subject: [SciPy-user] slow integrals In-Reply-To: <20060607204003.GA23340@arbutus.physics.mcmaster.ca> References: <4486A7F5.7070708@obs.univ-lyon1.fr> <20060607204003.GA23340@arbutus.physics.mcmaster.ca> Message-ID: <4487D7F5.2030700@obs.univ-lyon1.fr> > What do you use for your quadrature routine in your C version? > I am using NAG routines: d01amf for the +/- infinity one and a direct Gaussian quadrature for the inner integral. > Looking at quadrature(), though, it looks like it wraps the passed function > in an inefficient value-at-a-time wrapper (vec_func). Try removing that > wrapper from the source, so it will call your function with an array of > points to evaluate. Or raise a TypeError in your function if passed a scalar. > > I'm not sure I see what you mean here. Do you suggest I modify the scipy routines themselves? I will of course have a more detailed look at quad_explain. Thanks, Eric From alexandre.guimond at mirada-solutions.com Thu Jun 8 04:04:26 2006 From: alexandre.guimond at mirada-solutions.com (Alexandre Guimond) Date: Thu, 8 Jun 2006 09:04:26 +0100 Subject: [SciPy-user] cluster package removed / changed? Message-ID: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a> Hello. It seems there used to be a cluster package in scipy which contained a kmeans() function. I'm looking for that functionality, but I cannot find the cluster package in the most recent scipy (0.4.9), nor can I find the kmeans function. Has it been removed? Thx for any help alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivetti at itc.it Thu Jun 8 04:14:44 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Thu, 08 Jun 2006 10:14:44 +0200 Subject: [SciPy-user] cluster package removed / changed? In-Reply-To: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a> Message-ID: <4487DC74.4040701@itc.it> You could use the pycluster library instead: http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/software.htm#pycluster Definitely a very good and fast library, easy to use together with Numeric/numpy. Cheers, Emanuele Alexandre Guimond wrote: > Hello. It seems there used to be a cluster package in scipy which > contained a kmeans() function. I'm looking for that functionality, but I > cannot find the cluster package in the most recent scipy (0.4.9), nor > can I find the kmeans function. Has it been removed?
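For anyone wanting to try Emanuele's suggestion straight away, here is a minimal k-means sketch written from memory of Pycluster's published interface; the kcluster call signature and the toy data are assumptions to be checked against the Pycluster manual, and older releases expect Numeric rather than numpy arrays.

import numpy
from Pycluster import kcluster

# made-up data: six observations with two features, in two obvious clumps
data = numpy.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                    [8.0, 8.1], [7.9, 8.2], [8.1, 7.9]])

# partition into two clusters; npass sets the number of random restarts
clusterid, error, nfound = kcluster(data, nclusters=2, npass=5)
print clusterid    # one cluster label per observation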
From a.u.r.e.l.i.a.n at gmx.net Thu Jun 8 05:34:01 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Thu, 8 Jun 2006 11:34:01 +0200 Subject: [SciPy-user] Zeta function of complex argument Message-ID: <200606081134.01169.a.u.r.e.l.i.a.n@gmx.net> I was asked to forward this, since Waltraut has problems with sending mail to the lists. But she is able to read your answers. Johannes ------------------- Hi, can anybody tell me why the Riemann zeta function does not eat complex variables? For example scipy.special.zeta(2,2.0+1j*1.0) yields the error message 'function not supported for these types, and can't coerce safely to supported types'. Thanks, Waltraut From josegomez at gmx.net Thu Jun 8 05:59:50 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Thu, 08 Jun 2006 11:59:50 +0200 Subject: [SciPy-user] Scipy and Py2exe Message-ID: <20060608095950.97580@gmx.net> Hi! I have a WX application that uses scipy. I want to compile it into an exe file using py2exe. I am using scipy '0.3.3_303.4602' and py2exe version 0.6.5. I wrote my setup.py file, and got the error that the cephes library could not be found. This is mentioned in the py2exe wiki. I tried the workaround, but it doesn't help, as the executable cannot find the cephes.pyd file. After copying the file from the scipy library directory, the cephes library is found, but I get a new error: Traceback (most recent call last): File "PinchoGUI.py", line 11, in ? File "CalcularLai.pyc", line 1, in ? File "scipy\__init__.pyc", line 118, in ? File "scipy\__init__.pyc", line 37, in _pkg_titles ValueError: min() or max() arg is an empty sequence I also noticed that py2exe adds most of the scipy packages (linalg, fft...), which I am not using. Can I exclude some of these modules being added, and if so, how? I am only starting with py2exe, and will be happy to provide any more details. Many thanks for your help. Jose -- "Feel free" - 10 GB mailbox, 100 free SMS/month ... Try GMX TopMail now: http://www.gmx.net/de/go/topmail From arauzo at decsai.ugr.es Thu Jun 8 09:13:35 2006 From: arauzo at decsai.ugr.es (Antonio Arauzo Azofra) Date: Thu, 08 Jun 2006 15:13:35 +0200 Subject: [SciPy-user] Bug on 0.4.9 ? In-Reply-To: <533576C7D0D0F64EBD47E8E11922B955098FBDAA@GAIA.FBL.insead.intra> References: <533576C7D0D0F64EBD47E8E11922B955098FBDAA@GAIA.FBL.insead.intra> Message-ID: <4488227F.8030600@decsai.ugr.es> DE MENTEN Sebastien wrote: > In scipy 0.4.9, you get your results if you compute cov(transpose(t)) > instead of cov(t). Thanks Sebastien, you are right. It seems the semantics of the covariance function (scipy.cov) have changed. BE AWARE: this may make your programs give wrong results if updating from 0.4.8 to 0.4.9. The solution is using cov(t, rowvar=False). This ensures code will work the same way in both versions. > However, I do not know which one should be the standard. Probably using rows as vars is a good default, as it is the same used in R, but I think this should not have been changed in a minor version. This change is not present in the 0.4.9 tag on Trac[1]. :-? I don't understand what happened. Have I downloaded anything wrong? [1] http://projects.scipy.org/scipy/scipy/browser/tags/0.4.9/Lib/stats/stats.py Another question: in help(scipy.cov) it says: "Help on function cov in module numpy.lib.function_base: ..." Where does it really live? In scipy.stats, or in numpy? By the way, the function comment says that when a matrix is passed as argument it returns the covariance matrix, but if the matrix is Nx1 (or 1xN) it returns a number instead of a 1x1 matrix.
I think this is not homogeneous and it is not what expected. -- Saludos, Antonio Arauzo Azofra From massimo.sandal at unibo.it Thu Jun 8 11:08:02 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 08 Jun 2006 17:08:02 +0200 Subject: [SciPy-user] linear_least_squares: OverFlow error In-Reply-To: <20060606144629.0427f97f@arbutus.physics.mcmaster.ca> References: <4485C765.7020804@unibo.it> <20060606144629.0427f97f@arbutus.physics.mcmaster.ca> Message-ID: <44883D52.4050005@unibo.it> Hi, I upgraded today to latest numpy (0.9.8) and matplotlib (0.87.3). I still have the same error (upgraded code in snippet): >> --------- >> import matplotlib.numerix as nx >> >> contact_x_points=nx.array(x_points[left_bound:right_bound]) >> contact_y_points=nx.array(y_points[left_bound:right_bound]) >> >> A=nx.ones((len(contact_x_points),2)) >> A[:,0]=contact_x_points >> result=nx.linalg.lstsq(A,contact_y_points) >> --------- >> >> ...but when I run, it crashes with: >> >> result=nx.linalg.lstsq(A, contact_y_points) >> File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line >> 457, in lstsq >> nlvl = max( 0, int( math.log( float(min( m,n ))/2. ) ) + 1 ) >> OverflowError: math range error any hint? m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From cdavis at staffmail.ed.ac.uk Thu Jun 8 11:12:57 2006 From: cdavis at staffmail.ed.ac.uk (Cory Davis) Date: Thu, 08 Jun 2006 16:12:57 +0100 Subject: [SciPy-user] future-safe saving of numpy arrays? Message-ID: <44883E79.405@staffmail.ed.ac.uk> Hi All, I have had some trouble with my data and changes to numpy over time. I often want to save both single arrays and complicated objects with arrays as data members. Until now I have almost always used cPickle. But this can cause problems when I upgrade numpy/scipy, when I can no longer unpickle data saved using older versions. Does anyone have any suggestions on avoiding this problem? Cheers, Cory. -------------- next part -------------- A non-text attachment was scrubbed... Name: cdavis.vcf Type: text/x-vcard Size: 431 bytes Desc: not available URL: From massimo.sandal at unibo.it Thu Jun 8 11:17:35 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 08 Jun 2006 17:17:35 +0200 Subject: [SciPy-user] linear_least_squares: OverFlow error or flapack.error In-Reply-To: <4485C765.7020804@unibo.it> References: <4485C765.7020804@unibo.it> Message-ID: <44883F8F.3050809@unibo.it> Ok, I guess I found the problem is in my code (the arrays I'm passing are indeed empty for some reason, not surprising it can't line fit them). So high shame on me for having polluted the mailing list :( A lesson in humility (and debugging) indeed. Thanks anyway... I flee in shame. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From robert.kern at gmail.com Thu Jun 8 12:17:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 11:17:53 -0500 Subject: Re: [SciPy-user] cluster package removed / changed? In-Reply-To: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a> Message-ID: <44884DB1.3030208@gmail.com> Alexandre Guimond wrote: > Hello. It seems there used to be a cluster package in scipy which > contained a kmeans() function. I'm looking for that functionality, but I > cannot find the cluster package in the most recent scipy (0.4.9), nor > can I find the kmeans function. Has it been removed? No, it's still there. Look in Lib/cluster/vq.py for kmeans. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alexandre.guimond at mirada-solutions.com Thu Jun 8 12:33:08 2006 From: alexandre.guimond at mirada-solutions.com (Alexandre Guimond) Date: Thu, 8 Jun 2006 17:33:08 +0100 Subject: Re: [SciPy-user] cluster package removed / changed? Message-ID: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19987@oxfh5f1a> Hum... not there on my computer... could it be a unix-only thing? I'm using the windows installer...? -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Robert Kern Sent: 08 June 2006 18:18 To: SciPy Users List Subject: Re: [SciPy-user] cluster package removed / changed? Alexandre Guimond wrote: > Hello. It seems there used to be a cluster package in scipy which > contained a kmeans() function. I'm looking for that functionality, but I > cannot find the cluster package in the most recent scipy (0.4.9), nor > can I find the kmeans function. Has it been removed? No, it's still there. Look in Lib/cluster/vq.py for kmeans. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Jun 8 12:40:25 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 11:40:25 -0500 Subject: Re: [SciPy-user] cluster package removed / changed? In-Reply-To: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19987@oxfh5f1a> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B19987@oxfh5f1a> Message-ID: <448852F9.9060005@gmail.com> [Correcting top-posting] Alexandre Guimond wrote: > Robert Kern wrote: >> Alexandre Guimond wrote: >> >>>Hello.
It seems there used to be a cluster package in scipy which >>>contained a kmeans() function. I'm looking for that functionality, but >> >> I >> >>>can not find the cluster package in the most recent scipy (0.4.9), nor >>>can I find the kmeans function. Has it be removed? >> >> No, it's still there. Look in Lib/cluster/vq.py for kmeans. > > Hum... not there on my computer... could it be a unix-only thing? I'm > using the windows installer...? Ah, it looks like Pearu commented it out of the setup.py file for some reason that isn't explained in the commit log (revision 1874). Pearu, was that just a mistaken commit, or is there a reason? If you found cluster to be nonfunctional, it should be noted on the Trac, and then the package should possibly removed to the sandbox. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu Jun 8 13:37:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 Jun 2006 13:37:36 -0400 Subject: [SciPy-user] cluster package In-Reply-To: <4487DC74.4040701@itc.it> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a><4487DC74.4040701@itc.it> Message-ID: On Thu, 08 Jun 2006, Emanuele Olivetti apparently wrote: > You could use instead pycluster library: > http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/software.htm#pycluster It seems the license is SciPy compatible, and that it would be an obvious candidate for inclusion in SciPy ... fwiw, Alan Isaac From robert.kern at gmail.com Thu Jun 8 14:32:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 13:32:46 -0500 Subject: [SciPy-user] cluster package In-Reply-To: References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a><4487DC74.4040701@itc.it> Message-ID: <44886D4E.6050705@gmail.com> Alan G Isaac wrote: > On Thu, 08 Jun 2006, Emanuele Olivetti apparently wrote: > >>You could use instead pycluster library: >>http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/software.htm#pycluster > > It seems the license is SciPy compatible, > and that it would be an obvious candidate for > inclusion in SciPy ... However, it includes the RANLIB library which (a) doesn't mesh well with the other random number facilities in numpy and scipy and (b) is not commercially redistributable. http://orion.math.iastate.edu/burkardt/c_src/ranlib/ranlib_intro.txt Why nobody ever reads the RANLIB license is a mystery to me. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Thu Jun 8 14:47:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 Jun 2006 14:47:27 -0400 Subject: [SciPy-user] cluster package In-Reply-To: <44886D4E.6050705@gmail.com> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a><4487DC74.4040701@itc.it> <44886D4E.6050705@gmail.com> Message-ID: >> On Thu, 08 Jun 2006, Emanuele Olivetti apparently wrote: >>> You could use instead pycluster library: >>> http://bonsai.ims.u-tokyo.ac.jp/~mdehoon/software/cluster/software.htm#pycluster > Alan G Isaac wrote: >> It seems the license is SciPy compatible, and that it >> would be an obvious candidate for inclusion in SciPy ... 
On Thu, 08 Jun 2006, Robert Kern apparently wrote: > However, it includes the RANLIB library which (a) doesn't mesh well with the > other random number facilities in numpy and scipy and (b) is not commercially > redistributable. > http://orion.math.iastate.edu/burkardt/c_src/ranlib/ranlib_intro.txt Would the RANLIB functionality be readily replaced by numpy or SciPy functionality? I do not see any evidence that Michiel de Hoon is intentionally restricting redistribution, so perhaps there is still "room to move" here? Cheers, Alan Isaac From robert.kern at gmail.com Thu Jun 8 14:46:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 13:46:57 -0500 Subject: [SciPy-user] cluster package In-Reply-To: References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a><4487DC74.4040701@itc.it> <44886D4E.6050705@gmail.com> Message-ID: <448870A1.8020406@gmail.com> Alan G Isaac wrote: > On Thu, 08 Jun 2006, Robert Kern apparently wrote: > >>However, it includes the RANLIB library which (a) doesn't mesh well with the >>other random number facilities in numpy and scipy and (b) is not commercially >>redistributable. >> http://orion.math.iastate.edu/burkardt/c_src/ranlib/ranlib_intro.txt > > Would the RANLIB functionality be readily replaced by numpy > or SciPy functionality? It's used relatively deep in the C code. The last time I considered it, I came to the conclusion that it would probably be easier and certainly more satisfying to simply port the algorithms to Python than to spend time refactoring it to use a better PRNG. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Thu Jun 8 15:42:36 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Jun 2006 13:42:36 -0600 Subject: [SciPy-user] bug fixes + changes to scipy.optimize In-Reply-To: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C0@oxfh5f1a> References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C0@oxfh5f1a> Message-ID: <44887DAC.5070507@ee.byu.edu> Alexandre Guimond wrote: > Hello. > > I?ve fixed a small bug in scipy.optimize.brent where the maximum > number of iterations was not taken into account. > Thank you very much for the contribution. Could you file a ticket on the Trac pages and attach the patch there so we don't lose track of it? Thanks, -Travis From oliphant at ee.byu.edu Thu Jun 8 15:44:01 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Jun 2006 13:44:01 -0600 Subject: [SciPy-user] Zeta function of complex argument In-Reply-To: <200606081134.01169.a.u.r.e.l.i.a.n@gmx.net> References: <200606081134.01169.a.u.r.e.l.i.a.n@gmx.net> Message-ID: <44887E01.5020600@ee.byu.edu> Johannes Loehnert wrote: >I was asked to forward this, since Waltraut has problems with sending mail to >the lists. But she is able to read your answers. > >Johannes > >------------------- > >Hi, > >can anybody tell me, why the Rieman zeta function does not eat complex >variables? > > There is no underlying implementation for complex numbers. It could be added if somebody will submit a patch. -Travis From oliphant at ee.byu.edu Thu Jun 8 15:50:21 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 08 Jun 2006 13:50:21 -0600 Subject: [SciPy-user] future-safe saving of numpy arrays? 
In-Reply-To: <44883E79.405@staffmail.ed.ac.uk> References: <44883E79.405@staffmail.ed.ac.uk> Message-ID: <44887F7D.2000708@ee.byu.edu> Cory Davis wrote: > Hi All, > > I have had some trouble with my data and changes to numpy over time. > I often want to save both single arrays and complicated objects with > arrays as data members. Until now I have almost always used cPickle. > But this can cause problems when I upgrade numpy/scipy, when I can no > longer unpickle data saved using older versions. Unfortunately, there were some bugs in the NumPy reduce implementation that required small changes. Post 1.0, there will not be significant changes made to pickle that cause old pickles not to load (I don't seen any changes happening from now on, frankly). This is definitely a growing pain of the pre-1.0 release. > Does anyone have any suggestions on avoiding this problem? A problem with Pickle generally, is that if you pickle objects requiring specific modules, then any name changes in those modules will cause difficulties with loading (most of these problems can be worked around --- often trivially), but it does get to be a pain for long-term persistence with Pickle. Using PyTables is probably a better idea. -Travis From cookedm at physics.mcmaster.ca Thu Jun 8 16:53:51 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Jun 2006 16:53:51 -0400 Subject: [SciPy-user] future-safe saving of numpy arrays? In-Reply-To: <44887F7D.2000708@ee.byu.edu> References: <44883E79.405@staffmail.ed.ac.uk> <44887F7D.2000708@ee.byu.edu> Message-ID: <20060608165351.276a982b@arbutus.physics.mcmaster.ca> On Thu, 08 Jun 2006 13:50:21 -0600 Travis Oliphant wrote: > Cory Davis wrote: > > > Hi All, > > > > I have had some trouble with my data and changes to numpy over time. > > I often want to save both single arrays and complicated objects with > > arrays as data members. Until now I have almost always used cPickle. > > But this can cause problems when I upgrade numpy/scipy, when I can no > > longer unpickle data saved using older versions. > > Unfortunately, there were some bugs in the NumPy reduce implementation > that required small changes. Post 1.0, there will not be significant > changes made to pickle that cause old pickles not to load (I don't seen > any changes happening from now on, frankly). This is definitely a > growing pain of the pre-1.0 release. > > > Does anyone have any suggestions on avoiding this problem? > > A problem with Pickle generally, is that if you pickle objects requiring > specific modules, then any name changes in those modules will cause > difficulties with loading (most of these problems can be worked around > --- often trivially), but it does get to be a pain for long-term > persistence with Pickle. Using PyTables is probably a better idea. Looking at the pickle output (pickletools.dis is good for this), there's three names it needs: - the unpickle function: numpy.core._internal._reconstruct - numpy.ndarray - numpy.dtype The last two names are unlikely to change (although their representations may). As long as we as developers are careful, the first one is ok too. I would suggest adding a version number to the ndarray and dtype pickles, so that if we need to change the format post-1.0, we could still handle the old ones (or at least warn about them). [and also to the scalar types.] Looks like this can be added to the current code, while still being able to read current pickles. I'll add it sometime soon. 
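The three names David lists are easy to inspect for yourself; a small illustration follows (the exact opcode listing varies with the Python and numpy versions in use):

import pickle, pickletools
import numpy

a = numpy.arange(5)
s = pickle.dumps(a)

# the GLOBAL opcodes in the disassembly carry the names the unpickler
# must resolve, per David's list: numpy.core._internal._reconstruct,
# numpy.ndarray and numpy.dtype
pickletools.dis(s)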
-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From chris.lasher at gmail.com Thu Jun 8 17:57:56 2006 From: chris.lasher at gmail.com (Chris Lasher) Date: Thu, 8 Jun 2006 17:57:56 -0400 Subject: [SciPy-user] Triangular Matrix, Matrix Manipulations Message-ID: <128a885f0606081457m5f547bdi6e127185fadf7dd3@mail.gmail.com> Hi all, I am unsure whether my question is more appropriate for the SciPy or NumPy discussion list, but I'm hoping I chose wisely; if not, please feel free to correct me and I'll redirect the email. I am interested in learning if a triangular matrix (lower or upper, does not matter to me) structure exists in SciPy or NumPy. I'd like to work with DNA or protein distance matrices, which are symmetric about the central axis ([0][0] to [n][n]) and such a structure might help greatly. The main two things I am looking to do are: * Tally the number of entries in a matrix that are equal to or greater/less than a specified value. * Randomly shuffle the values in a matrix. I'm interested in using an approach with these libraries to keep memory usage and execution times lower, as some of these distance matrices can be thousands x thousands of sequences. A triangular matrix structure might be nice to halve the amount of iteration and also make it easier to randomly shuffle the values (since a square matrix requires the values to be shuffled symmetrically, or to be built from scratch one value at a time). My questions and approaches may be really naive, and I apologize in advance for that. My background is in the biological sciences, but I am openly ignorant about linear algebra/matrix-based mathematics. I have been programming in Python for two years now, but have no experience with SciPy and NumPy/Numeric/NumArray. I have searched through the NumPy documentation, which I purchased, as well as hunted through the SciPy API documentation, but I either have not looked in the right place or did not register what I was reading was what I was looking for. These aren't excuses for not knowing the proper way to go about this, I just want to be candid about my level of knowledge. Thanks very much in advance for your helpful comments and suggestions, as I do truly appreciate them. Chris Lasher From robert.kern at gmail.com Thu Jun 8 18:13:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Jun 2006 17:13:38 -0500 Subject: [SciPy-user] Triangular Matrix, Matrix Manipulations In-Reply-To: <128a885f0606081457m5f547bdi6e127185fadf7dd3@mail.gmail.com> References: <128a885f0606081457m5f547bdi6e127185fadf7dd3@mail.gmail.com> Message-ID: <4488A112.3000005@gmail.com> Chris Lasher wrote: > Hi all, > > I am unsure whether my question is more appropriate for the SciPy or > NumPy discussion list, but I'm hoping I chose wisely; if not, please > feel free to correct me and I'll redirect the email. > > I am interested in learning if a triangular matrix (lower or upper, > does not matter to me) structure exists in SciPy or NumPy. I'd like to > work with DNA or protein distance matrices, which are symmetric about > the central axis ([0][0] to [n][n]) and such a structure might help > greatly. I don't believe so. I've thought for several years now that a great addition to scipy.sparse would be "sparse" matrices that use the various "packed" storage schemes from LAPACK, including symmetric matrices. It would take a fair bit of work to get the details right, though. 
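In the meantime, both of the operations you mention are easy enough with a plain dense array if you treat just the strictly upper triangle as the data. A rough sketch (nothing clever, using a toy symmetric matrix in place of your real distances):

import numpy

n = 5
d = numpy.random.uniform(size=(n, n))
d = (d + numpy.transpose(d)) / 2.0   # toy symmetric "distance" matrix

# index pairs of the strictly upper triangle, each pair counted once
iu = [(i, j) for i in range(n) for j in range(i + 1, n)]
vals = numpy.array([d[i, j] for (i, j) in iu])

# tally entries at or above a threshold
print (vals >= 0.5).sum()

# shuffle the off-diagonal values while keeping d symmetric
vals = vals[numpy.random.permutation(len(vals))]
for (i, j), v in zip(iu, vals):
    d[i, j] = d[j, i] = v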
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From cmutel at gmail.com Thu Jun 8 23:13:40 2006
From: cmutel at gmail.com (Christopher Mutel)
Date: Thu, 8 Jun 2006 22:13:40 -0500
Subject: [SciPy-user] Possible bug in stats.describe
Message-ID: <5e5978e10606082013x70f065edhb7da12c82bab25e0@mail.gmail.com>

Hello.

I am using the Ubuntu pre-packaged version of SciPy (0.3.2-3ubuntu2). In my version, as best as I can tell, the stats.describe function is incorrect. It gives the variance of a sample, not the standard deviation. Could someone confirm that I am reading this right?

From my stats.py:

def describe(a,axis=-1):
    """Returns several descriptive statistics of the passed array. Axis
    can equal None (ravel array first), or an integer (the axis over
    which to operate)

    Returns: n, (min,max), mean, standard deviation, skew, kurtosis
    """
    a, axis = _chk_asarray(a, axis)
    n = a.shape[axis]
    mm = (minimum.reduce(a),maximum.reduce(a))
    m = mean(a,axis)
    v = var(a,axis)
    sk = skew(a,axis)
    kurt = kurtosis(a,axis)
    return n, mm, m, v, sk, kurt

This is the first time I have ever suspected a bug, so maybe I am doing this all wrong. Or has this been noticed already and fixed?

-Chris Mutel

From robert.kern at gmail.com Thu Jun 8 23:42:20 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 08 Jun 2006 22:42:20 -0500
Subject: [SciPy-user] Possible bug in stats.describe
In-Reply-To: <5e5978e10606082013x70f065edhb7da12c82bab25e0@mail.gmail.com>
References: <5e5978e10606082013x70f065edhb7da12c82bab25e0@mail.gmail.com>
Message-ID: <4488EE1C.7040107@gmail.com>

Christopher Mutel wrote:
> Hello.
>
> I am using the Ubuntu pre-packaged version of SciPy (0.3.2-3ubuntu2).
> In my version, as best as I can tell, the stats.describe function is
> incorrect. It gives the variance of a sample, not the standard
> deviation. Could someone confirm that I am reading this right?

> This is the first time I have ever suspected a bug, so maybe I am
> doing this all wrong. Or has this been noticed already and fixed?

Yes, it has been noticed and the docstring fixed.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From schofield at ftw.at Fri Jun 9 02:31:20 2006
From: schofield at ftw.at (Ed Schofield)
Date: Fri, 9 Jun 2006 08:31:20 +0200
Subject: [SciPy-user] Triangular Matrix, Matrix Manipulations
In-Reply-To: <128a885f0606081457m5f547bdi6e127185fadf7dd3@mail.gmail.com>
References: <128a885f0606081457m5f547bdi6e127185fadf7dd3@mail.gmail.com>
Message-ID: <3E14F3FF-1E69-4385-94E0-9EC3CAAE46E7@ftw.at>

On 08/06/2006, at 11:57 PM, Chris Lasher wrote:

> Hi all,
>
> ...
>
> I am interested in learning if a triangular matrix (lower or upper,
> does not matter to me) structure exists in SciPy or NumPy. I'd like to
> work with DNA or protein distance matrices, which are symmetric about
> the central axis ([0][0] to [n][n]) and such a structure might help
> greatly.
>
> The main two things I am looking to do are:
> * Tally the number of entries in a matrix that are equal to or
> greater/less than a specified value.
> * Randomly shuffle the values in a matrix.
>
> I'm interested in using an approach with these libraries to keep
> memory usage and execution times lower, as some of these distance
> matrices can be thousands x thousands of sequences. A triangular
> matrix structure might be nice to halve the amount of iteration and
> also make it easier to randomly shuffle the values (since a square
> matrix requires the values to be shuffled symmetrically, or to be
> built from scratch one value at a time).

Are the upper triangles of your matrices sparse? If not, I suggest you use dense arrays, since the memory overhead will only be a factor of two and implementing it will be easier. SciPy doesn't currently have a symmetric sparse matrix format, but this probably wouldn't help with your problem anyway; the CSR or CSC type should serve you just as well. The first point you mentioned is easy to support with these types, but the second could be problematic unless the shuffle keeps the zeros in the same places ...

-- Ed

From alexandre.guimond at mirada-solutions.com Fri Jun 9 03:43:45 2006
From: alexandre.guimond at mirada-solutions.com (Alexandre Guimond)
Date: Fri, 9 Jun 2006 08:43:45 +0100
Subject: [SciPy-user] bug fixes + changes to scipy.optimize
Message-ID: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B199A7@oxfh5f1a>

Done. Ticket #207

Alex.

-----Original Message-----
From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant
Sent: 08 June 2006 21:43
To: SciPy Users List
Subject: Re: [SciPy-user] bug fixes + changes to scipy.optimize

Alexandre Guimond wrote:

> Hello.
>
> I've fixed a small bug in scipy.optimize.brent where the maximum
> number of iterations was not taken into account.
>

Thank you very much for the contribution. Could you file a ticket on the Trac pages and attach the patch there so we don't lose track of it?

Thanks,

-Travis

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.net
http://www.scipy.net/mailman/listinfo/scipy-user

From olivetti at itc.it Fri Jun 9 04:05:13 2006
From: olivetti at itc.it (Emanuele Olivetti)
Date: Fri, 09 Jun 2006 10:05:13 +0200
Subject: [SciPy-user] cluster package
In-Reply-To: <44886D4E.6050705@gmail.com>
References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a><4487DC74.4040701@itc.it> <44886D4E.6050705@gmail.com>
Message-ID: <44892BB9.7040806@itc.it>

Robert Kern wrote:
> http://orion.math.iastate.edu/burkardt/c_src/ranlib/ranlib_intro.txt
>
> Why nobody ever reads the RANLIB license is a mystery to me.
>

Thanks a lot for mentioning ranlib's license problems. I completely missed it. I'm investigating Pycluster to see which parts of ranlib are actually used. As far as I read from ranlib's license, some code is public domain and some other is under the restrictive ACM license (the source code seems clear enough to understand which part is under one license and which part is under the other one).
If you have suggestions on this point please let me know.

Emanuele

From victor.martinez at uib.es Fri Jun 9 05:04:59 2006
From: victor.martinez at uib.es (=?ISO-8859-1?Q?V=EDctor_Mart=EDnez-Moll?=)
Date: Fri, 09 Jun 2006 11:04:59 +0200
Subject: [SciPy-user] Problems with odeint
Message-ID: <448939BB.6030103@uib.es>

Hi all,

I've been a SciLab user for some time and I'm evaluating SciPy as a development tool.

The first thing I tried is to solve a simple second order differential equation using odeint(). The problem is that, depending on the function I want to integrate, I get nice results, but for most of them I get simply nothing or nonsense answers. It is not a problem of the function having strange behaviour or singularity points. For example if I try to solve:

d2y/dt2 = 1-sin(y)

either I get nothing or wrong solutions (the best thing I got was setting hmin=0.01, atol=.001), while if I do about the same procedure in SciLab I get a nice and smooth set of curves. The strangest thing is that if I use exactly the same procedure to solve:

d2y/dt2 = 1-y

then I get the right solution, which seems to indicate that I'm doing the right thing (although of course I know I'm not, because I do not believe that odeint is unable to solve such a silly thing).

I've only checked it with the last enthon distribution I found: enthon-python2.4-1.0.0.beta2.exe

The simple procedure I wrote in Python and its equivalent in SciLab that does the right thing are:

######################################
## The Python one ##
######################################

from scipy import *
from matplotlib.pylab import *

def dwdt(w,t):
    return [w[1],1.0-sin(w[0])]

t=arange(0.0,2.0*pi,.01)

ww = integrate.odeint(dwdt,[0.0,0.0],t,hmin=0.01,atol=.001)

y = ww[:,0]
dy =ww[:,1]
ddy = 1.0-sin(ww[:,0])

plot(t,y,label='y')
plot(t,dy,label='dy')
plot(t,ddy,label='ddy')
legend()
show()

######################################
## The SciLab one ##
######################################

function result = dwdt(t,w)
    result(1) = w(2)
    result(2) = 1.0-sin(w(1))
endfunction

t = 0.0:0.01:2.0*%pi;

ww = ode([0.0;0.0],0.0,t,dwdt);

y = ww(1,:);
dy = ww(2,:);
ddy=1.0-sin(ww(1,:));

xset("window", 0)
xbasc()
plot2d(t,[y;dy;ddy]',leg="y@dy@ddy")

########################################

Any ideas or suggestions will be welcome.
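For reference, one quick consistency check on odeint's output, whatever the right-hand side: differentiate the returned y column numerically and compare it against the returned dy column (a rough sketch using central differences; the bare call below also omits hmin and atol on purpose, just to see what the defaults do on a given install):

from scipy import *

def dwdt(w, t):
    return [w[1], 1.0 - sin(w[0])]

t = arange(0.0, 2.0*pi, .01)
ww = integrate.odeint(dwdt, [0.0, 0.0], t)

# central-difference estimate of dy/dt from the y column; it should
# agree with the dy column if the returned solution is self-consistent
y, dy = ww[:, 0], ww[:, 1]
dy_est = (y[2:] - y[:-2]) / (2 * 0.01)
print abs(dy_est - dy[1:-1]).max()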
--
Víctor Martínez Moll | Universitat de les Illes Balears
Departament de Física | Edifici Mateu Orfila
Àrea d'Enginyeria Mecànica | E-07122, Palma de Mallorca, SPAIN
e-mail: victor.martinez at uib.es | Tel:34-971171374 Fax:34-971173426

From jstrunk at enthought.com Mon Jun 12 10:58:14 2006
From: jstrunk at enthought.com (Jeff Strunk)
Date: Mon, 12 Jun 2006 09:58:14 -0500
Subject: [SciPy-user] Mailing list is back
Message-ID: <200606120958.14794.jstrunk@enthought.com>

Good morning,

On Friday, the /home partition on old.scipy.org (this still handles mail) filled up. Since that is where Mailman was storing messages, some messages sent over the weekend may need to be resent.

Sorry for the inconvenience,

Jeff Strunk
IT Administrator
Enthought Inc.

From olivetti at itc.it Mon Jun 12 11:06:57 2006
From: olivetti at itc.it (Emanuele Olivetti)
Date: Mon, 12 Jun 2006 17:06:57 +0200
Subject: [SciPy-user] cluster package
In-Reply-To: <44892BB9.7040806@itc.it>
References: <4926A5BE4AFE7C4A83D5CF5CDA7B775401B198C2@oxfh5f1a><4487DC74.4040701@itc.it> <44886D4E.6050705@gmail.com> <44892BB9.7040806@itc.it>
Message-ID: <448D8311.6050705@itc.it>

From a preliminary investigation this is what I got.

Pycluster relies on the C clustering library. The C clustering library uses ranlib.

According to the Pycluster website, "The C clustering library and Pycluster were released under the Python License."[*] That's good for Scipy.

Ranlib has a mixed license: ACM restrictive for some functions and public domain for the rest of the code. In particular, the ACM license is not good for Scipy.

Ranlib is called only inside Pycluster's 'cluster.c', and exactly here:

cluster.c:1359:    setall (iseed1, iseed2);
cluster.c:1399:    genprm (map, nelements);
cluster.c:1407:    clusterid[map[i]] = ignuin (0,nclusters-1);
cluster.c:3161:    { double term = genunf(-1.,1.);
cluster.c:3173:    genprm (index, nelements);

So there are just 4 functions used from ranlib:
1) setall() : initialization of the generator (ACM restrictive license, see ranlib's com.c)
2) ignuin() : generates a uniformly distributed integer; it uses ranlib's ignlgi(), which has the ACM restrictive license
3) genunf() : generates a uniformly distributed real; it uses ranlib's ranf(), which has the ACM restrictive license
4) genprm() : generates a random permutation; it uses ranlib's ignuin()

It seems that handling those 5 function calls is enough to separate Pycluster from ranlib and the ACM restrictive license. Since the 4 ranlib functions are used in Pycluster's cluster.c only in the 'randomassign' function (used in the k-means and k-medians high-level functions) and in the 'somworker' function (called in the somcluster high-level function), it seems not that difficult to call another, more friendly, RNG library.

Which libraries could substitute ranlib for Pycluster? As far as I understand there isn't a big performance need related to Pycluster's use of ranlib.

Observations/Corrections/Suggestions are welcome!

Emanuele

[*]: Note that this is not exactly what is written inside the source package, where there is a standard BSD-like license (see cluster.c), whose text has more or less the same meaning as the Python license but slightly different words (a question: to which version of Python do they refer? There was a non-trivial evolution of that license during the last years...). Anyway we can say that the Pycluster sources, except ranlib*, are BSD-like.

Emanuele Olivetti wrote:
> Robert Kern wrote:
>> http://orion.math.iastate.edu/burkardt/c_src/ranlib/ranlib_intro.txt
>>
>> Why nobody ever reads the RANLIB license is a mystery to me.
>>
>
> Thanks a lot for mentioning ranlib's license problems. I completely missed it.
> I'm investigating Pycluster to see which parts of ranlib are actually used.
> As far as I read from ranlib's license, some code is public domain and some other
> is under the restrictive ACM license (the source code seems clear enough to
> understand which part is under one license and which part is under the other one).
>
> If you have suggestions on this point please let me know.
>
> Emanuele
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
>

From bblais at bryant.edu Mon Jun 12 11:13:20 2006
From: bblais at bryant.edu (Brian Blais)
Date: Mon, 12 Jun 2006 11:13:20 -0400
Subject: [SciPy-user] scipy.io.loadmat can't handle structs from octave
Message-ID: <448D8490.4080105@bryant.edu>

(If this comes twice, please forgive. I sent this earlier, and it didn't appear.)

Hello,

I am trying to load some .mat files in Python that were saved with Octave. I get some weird things with strings, and structs fail altogether. Am I doing something wrong? Python 2.4, Scipy '0.4.9.1906', numpy 0.9.8, octave 2.1.71, running Linux.
thanks, Brian Blais here is what I tried: Numbers are ok: ========OCTAVE========== >> a=rand(4) a = 0.617860 0.884195 0.032998 0.217922 0.207970 0.753992 0.333966 0.905661 0.048432 0.290895 0.353919 0.958442 0.697213 0.616851 0.426595 0.371364 >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [13]:d=io.loadmat('pythonfile.mat') In [14]:d Out[14]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:23:54 UTC', '__version__': '1.0', 'a': array([[ 0.61785957, 0.88419484, 0.03299807, 0.21792207], [ 0.20796989, 0.75399171, 0.33396634, 0.90566095], [ 0.04843219, 0.29089527, 0.35391921, 0.95844178], [ 0.69721313, 0.61685075, 0.42659485, 0.37136358]])} Strings are weird (turns to all 1's) ========OCTAVE========== >> a='hello' a = hello >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [15]:d=io.loadmat('pythonfile.mat') In [16]:d Out[16]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:13 UTC', '__version__': '1.0', 'a': '11111'} Cell arrays are fine (except for strings): ========OCTAVE========== >> a={5 [1,2,3] 'this'} a = { [1,1] = 5 [1,2] = 1 2 3 [1,3] = this } >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [17]:d=io.loadmat('pythonfile.mat') In [18]:d Out[18]: {'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:51 UTC', '__version__': '1.0', 'a': array([5.0, [ 1. 2. 3.], 1111], dtype=object)} Structs crash: ========OCTAVE========== >> clear a >> a.hello=5 a = { hello = 5 } >> a.this=[1,2,3] a = { hello = 5 this = 1 2 3 } >> save -mat-binary pythonfile.mat a =========PYTHON=========== In [19]:d=io.loadmat('pythonfile.mat') --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /home/bblais/octave/work/mouse/ /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename) 751 if not (0 in test_vals): # MATLAB version 5 format 752 fid.rewind() --> 753 thisdict = _loadv5(fid,basename) 754 if dict is not None: 755 dict.update(thisdict) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _loadv5(fid, basename) 688 try: 689 var = var + 1 --> 690 el, varname = _get_element(fid) 691 if varname is None: 692 varname = '%s_%04d' % (basename,var) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _get_element(fid) 676 677 # handle miMatrix type --> 678 el, name = _parse_mimatrix(fid,numbytes) 679 return el, name 680 /usr/lib/python2.4/site-packages/scipy/io/mio.py in _parse_mimatrix(fid, bytes) 597 result[i].__dict__[element] = val 598 result = squeeze(transpose(reshape(result,tupdims))) --> 599 if rank(result)==0: result = result.item() 600 601 # object is like a structure with but with a class name AttributeError: mat_struct instance has no attribute 'item' -- ----------------- bblais at bryant.edu http://web.bryant.edu/~bblais From chanley at stsci.edu Mon Jun 12 11:15:20 2006 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 12 Jun 2006 11:15:20 -0400 Subject: [SciPy-user] PYFITS 1.1 "BETA 2" RELEASE Message-ID: <448D8508.4040709@stsci.edu> ------------------ | PYFITS Release | ------------------ Space Telescope Science Institute is pleased to announce the "beta 2" release of PyFITS 1.1. This release includes support for both the NUMPY and NUMARRAY array packages. This software can be downloaded at: http://www.stsci.edu/resources/software_hardware/pyfits/Download The NUMPY support in PyFITS is not nearly as well tested as the NUMARRAY support. 
We expect that you will encounter bugs. Please send bug reports to "help at stsci.edu". We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 1 year. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates. ----------- | Version | ----------- Version 1.1b2; June 12, 2006 ------------------------------- | Major Changes since v1.1b | ------------------------------- * Corrected problem that prevented the writing of binary tables from "little endian" machines. * Fixed bug in the _ImageBaseHDU class of the NP_pyfits module that assumed all images were 2 dimensional. * Added the "names" attribute to the FITS_rec class in the NP_pyfits module. This provides a backward compatible interface to the numarray version of FITS_rec objects. ------------------------- | Software Requirements | ------------------------- PyFITS Version 1.1b2 REQUIRES: * Python 2.3 or later * NUMPY 0.9.8 or NUMARRAY --------------------- | Installing PyFITS | --------------------- PyFITS 1.1b is distributed as a Python distutils module. Installation simply involves unpacking the package and executing % python setup.py install to install it in Python's site-packages directory. Alternatively the command %python setup.py install --local="/destination/directory/" will install PyFITS in an arbitrary directory which should be placed on PYTHONPATH. Once numarray or numpy has been installed, then PyFITS should be available for use under Python. ----------------- | Download Site | ----------------- http://www.stsci.edu/resources/software_hardware/pyfits/Download ---------- | Usage | ---------- Users will issue an "import pyfits" command as in the past. However, the use of the NUMPY or NUMARRAY version of PyFITS will be controlled by an environment variable called NUMERIX. Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS. Set NUMERIX to 'numpy' for the NUMPY version of pyfits. If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed the NUMERIX value is used to decide which version to import. If no NUMERIX value is set then the NUMARRAY version of PyFITS will be imported. Anything else will raise an exception upon import. --------------- | Bug Reports | --------------- Please send all PyFITS bug reports to help at stsci.edu ------------------ | Advanced Users | ------------------ Users who would like the "bleeding" edge of PyFITS can retrieve the software from our SUBVERSION repository hosted at: http://astropy.scipy.org/svn/pyfits/trunk We also provide a Trac site at: http://projects.scipy.org/astropy/pyfits/wiki -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From maik.troemel at maitro.net Mon Jun 12 12:19:29 2006 From: maik.troemel at maitro.net (=?ISO-8859-1?Q?Maik_Tr=F6mel?=) Date: Mon, 12 Jun 2006 18:19:29 +0200 Subject: [SciPy-user] dtype Message-ID: <448D9411.50400@maitro.net> Hello list, i've got a question concerning datatypes. 
I'm importing a file filled with 2-byte data values via

arrh = numpy.fromstring(radFile, dtype = 'S2', count = 900*900)

Now I want to convert the data via

value = ord(arr[n][m][0]) * 256 + ord(arr[n][m][1])

But if the value in the array is NULL I get an error:

IndexError: string index out of range

So I think I have to choose another dtype. But I don't have any idea which one. Probably someone has an idea.

Thanks for support.

Greetings
Maik

From robert.kern at gmail.com Mon Jun 12 12:31:09 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 12 Jun 2006 11:31:09 -0500
Subject: [SciPy-user] dtype
In-Reply-To: <448D9411.50400@maitro.net>
References: <448D9411.50400@maitro.net>
Message-ID: <448D96CD.8060701@gmail.com>

Maik Trömel wrote:
> Hello list,
>
> i've got a question concerning datatypes.
> I'm importing a file filled with 2-byte data values via
>
> arrh = numpy.fromstring(radFile, dtype = 'S2', count = 900*900)
>
> Now I want to convert the data via
>
> value = ord(arr[n][m][0]) * 256 + ord(arr[n][m][1])
>
> But if the value in the array is NULL I get an error:
>
> IndexError: string index out of range

(A) In order for us to help you debug some code, you need to distill it down to the smallest code that actually displays the error, and then provide us the actual code.

(B) In this case, there's a better way. See below.

> So I think I have to choose another dtype. But I don't have any idea
> which one.
> Probably someone has an idea.

arrh = numpy.fromstring(radFile, dtype=numpy.int16, count=900*900)

You don't need to do any other conversions. You may have to pay attention to byteswapping depending on the endianness of your platform, though.
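For example, if the file stores the values big-endian and you are reading it on a little-endian machine (an assumption -- check what your data source actually writes), either of these should work; radFile here is the raw file contents, as in the original post:

import numpy

# read, then swap in a second step
arrh = numpy.fromstring(radFile, dtype=numpy.int16, count=900*900)
arrh = arrh.byteswap()

# or spell the byte order directly in the dtype: '>i2' is big-endian int16
arrh = numpy.fromstring(radFile, dtype='>i2', count=900*900)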
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From nvf at MIT.EDU Mon Jun 12 12:44:36 2006
From: nvf at MIT.EDU (Nick Fotopoulos)
Date: Mon, 12 Jun 2006 12:44:36 -0400
Subject: [SciPy-user] scipy.io.loadmat can't handle structs from octave
In-Reply-To: References: Message-ID: <448D99F4.2060600@mit.edu>

From: Brian Blais
> (If this comes twice, please forgive. I sent this earlier, and it
> didn't appear.)
>
> Hello,
>
> I am trying to load some .mat files in Python that were saved with
> Octave. I get some weird things with strings, and structs fail
> altogether. Am I doing something wrong? Python 2.4, Scipy
> '0.4.9.1906', numpy 0.9.8, octave 2.1.71, running Linux.
>
> thanks,
>
> Brian Blais

Brian,

This is a bug with how loadmat handles mat-file strings in general. I fixed it in ticket #14 (http://projects.scipy.org/scipy/scipy/ticket/14), but it has not yet been checked into SVN (I am not a dev).

After backing up mio.py, you can drop the two files attached to the ticket right into your site-packages/scipy/io directory. Let me know if it works for you.

Take care,
Nick

From bblais at bryant.edu Mon Jun 12 13:47:47 2006
From: bblais at bryant.edu (Brian Blais)
Date: Mon, 12 Jun 2006 13:47:47 -0400
Subject: [SciPy-user] scipy.io.loadmat can't handle structs from octave
In-Reply-To: <448D99F4.2060600@mit.edu>
References: <448D99F4.2060600@mit.edu>
Message-ID: <448DA8C3.30503@bryant.edu>

Nick Fotopoulos wrote:
>
> This is a bug with how loadmat handles mat-file strings in general. I
> fixed it in ticket #14 (http://projects.scipy.org/scipy/scipy/ticket/14),
> but it has not yet been checked into SVN (I am not a dev).
>
> After backing up mio.py, you can drop the two files attached to the
> ticket right into your site-packages/scipy/io directory. Let me know if
> it works for you.
>

thanks,

this fixes the string problem, but it still crashes on loading any struct.

bb

--
-----------------
bblais at bryant.edu
http://web.bryant.edu/~bblais

From e.buddy.damm at timken.com Mon Jun 12 13:52:37 2006
From: e.buddy.damm at timken.com (Buddy Damm)
Date: Mon, 12 Jun 2006 13:52:37 -0400
Subject: [SciPy-user] SciPy - ACML - PGI compiler. Getting a newbie started
Message-ID: <1150134757.19303.15.camel@damme-lnx.corp.timken.com>

Hi,

I need to have SciPy 0.3.x to improve efficiency for FiPy. I'm having trouble, and have tried several things. My machine is Suse 10.0 linux, 64 bit. I have the SciPy_complete-0.3.2 tar ball, and have tried to modify my site.cfg file as shown below. Since I have the acml math libraries I don't have atlas, blas, or lapack. I am also running the Portland Group workstation compilers. Below is further information on my site.cfg and the errors I am getting. Any help to get the installation to work would be greatly appreciated. Thanks!

Here is the meat of my site.cfg:

[DEFAULT]
library_dirs = /usr/lib:/opt/acml3.0.0/pgi64/lib
include_dirs = /opt/acml3.0.0/pgi64/include

[atlas]
atlas_libs = acml
language = pgf77

[lapack]
lapack_libs = acml
language = pgf77

[lapack_src]
# src_dirs = /opt/acml3.0.0/pgi64/include

[blas]
blas_libs = acml
language = pgf77

Here are the errors when I run

$ python setup.py build build_flib --fcompiler=pgf77 install

ERRORS/NOTES

fftw_info: NOT AVAILABLE

dfftw_info: NOT AVAILABLE

FFTW (http://www.fftw.org/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [fftw]) or by setting the FFTW environment variable.

djbfft_info: NOT AVAILABLE

DJBFFT (http://cr.yp.to/djbfft.html) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [djbfft]) or by setting the DJBFFT environment variable.

blas_opt_info: atlas_blas_threads_info: scipy_distutils.system_info.atlas_blas_threads_info NOT AVAILABLE

atlas_blas_info: scipy_distutils.system_info.atlas_blas_info NOT AVAILABLE

scipy_core/scipy_distutils/system_info.py:982: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__)

blas_info: NOT AVAILABLE

scipy_core/scipy_distutils/system_info.py:991: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__)

blas_src_info: NOT AVAILABLE

scipy_core/scipy_distutils/system_info.py:994: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the scipy_distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__)

NOT AVAILABLE

Traceback (most recent call last):
  File "setup.py", line 111, in ?
    setup_package(ignore_packages)
  File "setup.py", line 85, in setup_package
    ignore_packages = ignore_packages)
  File "scipy_core/scipy_distutils/misc_util.py", line 475, in get_subpackages
    config = setup_module.configuration(*args)
  File "/home/damme/DownLoads/fipy/SciPy_complete-0.3.2/Lib/integrate/setup_integrate.py", line 22, in configuration
    raise NotFoundError,'no blas resources found'
scipy_distutils.system_info.NotFoundError: no blas resources found

--
E. Buddy Damm
ph. 330-471-2703
e.buddy.damm at timken.com

From thetimin at gmail.com Mon Jun 12 15:51:24 2006
From: thetimin at gmail.com (Mitchell Timin)
Date: Mon, 12 Jun 2006 12:51:24 -0700
Subject: [SciPy-user] ANN: Metavolv.py released - thanks to you guys
Message-ID: <9a2f1cdd0606121251r52d93c81n61e17effa32002ba@mail.gmail.com>

Numpy.linalg.lstsq() worked out very well for me, and is a key part of my just-released meta-evolver. Are attachments allowed on this list? If so I can attach the HTML doc file. (and it has one .png diagram) Otherwise, you can read a brief description at http://sourceforge.net/forum/forum.php?forum_id=577967

Mitchell Timin

--
I'm proud of http://ANNEvolve.sourceforge.net. If you want to write software, or articles, or do testing or research for ANNEvolve, let me know.

From nvf at MIT.EDU Mon Jun 12 15:53:07 2006
From: nvf at MIT.EDU (Nick Fotopoulos)
Date: Mon, 12 Jun 2006 15:53:07 -0400
Subject: [SciPy-user] scipy.io.loadmat can't handle structs from octave
In-Reply-To: <448DA8C3.30503@bryant.edu>
References: <448D99F4.2060600@mit.edu> <448DA8C3.30503@bryant.edu>
Message-ID: <5833813C-18BD-4854-B933-37CE78D58A0F@mit.edu>

On Jun 12, 2006, at 1:47 PM, Brian Blais wrote:

> Nick Fotopoulos wrote:
>> This is a bug with how loadmat handles mat-file strings in
>> general. I fixed it in ticket #14 (http://projects.scipy.org/
>> scipy/scipy/ticket/14), but it has not yet been checked into SVN
>> (I am not a dev).
>> After backing up mio.py, you can drop the two files attached to
>> the ticket right into your site-packages/scipy/io directory. Let
>> me know if it works for you.
>
> thanks,
>
> this fixes the string problem, but it still crashes on loading any
> struct.
>

Ah, I had never needed structs before, so have never been through this particular code path. I have reproduced the problem with Matlab-generated matfiles, so now we know that it's not Octave-specific.

I removed the offending line that causes the crash and it's clear to me that there's a deeper issue. I poked around for several minutes and didn't see the root of the problem. Unfortunately, I don't have time to investigate any more deeply. My thesis deadline looms. Sorry I can't be of greater help.
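If someone wants to dig further: the crash itself comes from mio.py calling .item() on the squeezed result even when that result is a mat_struct rather than an array. A guard like this (an untested sketch; it only silences the AttributeError and does not address the deeper issue) gets past the crash:

# in scipy/io/mio.py, _parse_mimatrix, around line 599:
# only unwrap rank-0 results that really support .item();
# mat_struct instances don't
if rank(result) == 0 and hasattr(result, 'item'):
    result = result.item()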
Take care,
Nick

From lev at columbia.edu Mon Jun 12 16:22:47 2006
From: lev at columbia.edu (Lev Givon)
Date: Mon, 12 Jun 2006 16:22:47 -0400
Subject: [SciPy-user] cluster package
In-Reply-To: <448D8311.6050705@itc.it>
References: <44886D4E.6050705@gmail.com> <44892BB9.7040806@itc.it> <448D8311.6050705@itc.it>
Message-ID: <20060612202247.GA15339@avicenna.cc.columbia.edu>

Received from Emanuele Olivetti on Mon, Jun 12, 2006 at 11:06:57AM EDT:

> From a preliminary investigation this is what I got.
>
> Pycluster relies on the C clustering library. The C clustering library
> uses ranlib.
>
> According to the Pycluster website, "The C clustering library and Pycluster
> were released under the Python License."[*] That's good for Scipy.
>
> Ranlib has a mixed license: ACM restrictive for some functions and
> public domain for the rest of the code. In particular, the ACM license is
> not good for Scipy.
>
> Ranlib is called only inside Pycluster's 'cluster.c', and exactly
> here:
> cluster.c:1359: setall (iseed1, iseed2);
> cluster.c:1399: genprm (map, nelements);
> cluster.c:1407: clusterid[map[i]] = ignuin (0,nclusters-1);
> cluster.c:3161: { double term = genunf(-1.,1.);
> cluster.c:3173: genprm (index, nelements);
>
> So there are just 4 functions used from ranlib:
> 1) setall() : initialization of the generator (ACM restrictive
> license, see ranlib's com.c)
> 2) ignuin() : generates a uniformly distributed integer; it uses
> ranlib's ignlgi(), which has the ACM restrictive license
> 3) genunf() : generates a uniformly distributed real; it uses
> ranlib's ranf(), which has the ACM restrictive license
> 4) genprm() : generates a random permutation; it uses ranlib's ignuin()
>
> It seems that handling those 5 function calls is enough to separate
> Pycluster from ranlib and the ACM restrictive license. Since the 4 ranlib
> functions are used in Pycluster's cluster.c only in the 'randomassign'
> function (used in the k-means and k-medians high-level functions) and in
> the 'somworker' function (called in the somcluster high-level function),
> it seems not that difficult to call another, more friendly, RNG library.
>
> Which libraries could substitute ranlib for Pycluster? As far as I
> understand there isn't a big performance need related to Pycluster's use
> of ranlib.
>
> Observations/Corrections/Suggestions are welcome!
>
> Emanuele
>
> [*]: Note that this is not exactly what is written inside the source
> package, where there is a standard BSD-like license (see cluster.c),
> whose text has more or less the same meaning as the Python license
> but slightly different words (a question: to which version of Python
> do they refer? There was a non-trivial evolution of that license
> during the last years...). Anyway we can say that the Pycluster sources,
> except ranlib*, are BSD-like.
>

numpy makes use of a free (BSD-like license) C implementation of the Mersenne Twister called randomkit that can be used to generate integer and real uniform random numbers. A bit of coding can provide a random permutation generator that uses the randomkit functions. The latest version is available here (numpy 0.9.8 uses a slightly older version):

http://www.jeannot.org/~js/code/randomkit-1.6.tgz

L.G.
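For what it's worth, the four ranlib entry points map almost one-to-one onto numpy.random at the Python level (a sketch of the correspondence only; a C-level port of cluster.c would call the underlying randomkit functions instead, and the toy values here are just placeholders):

import numpy

nclusters, nelements = 3, 10

numpy.random.seed(12345)                     # ~ setall(iseed1, iseed2)
i = numpy.random.randint(0, nclusters)       # ~ ignuin(0, nclusters-1); note the high end is exclusive here
term = numpy.random.uniform(-1.0, 1.0)       # ~ genunf(-1., 1.)
index = numpy.random.permutation(nelements)  # ~ genprm(index, nelements)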
From elcorto at gmx.net Mon Jun 12 20:03:11 2006
From: elcorto at gmx.net (Steve Schmerler)
Date: Tue, 13 Jun 2006 02:03:11 +0200
Subject: [SciPy-user] 'module' object has no attribute 'UnsignedInt8'
Message-ID: <448E00BF.2060508@gmx.net>

Hi

After upgrading to the latest svn version I get

In [4]: import scipy
import misc -> failed: 'module' object has no attribute 'UnsignedInt8'

Besides this, everything seems to work.

cheers,
steve

--
Random number generation is the art of producing pure gibberish as quickly as possible.

From oliphant at ee.byu.edu Mon Jun 12 20:10:50 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Mon, 12 Jun 2006 18:10:50 -0600
Subject: [SciPy-user] 'module' object has no attribute 'UnsignedInt8'
In-Reply-To: <448E00BF.2060508@gmx.net>
References: <448E00BF.2060508@gmx.net>
Message-ID: <448E028A.80706@ee.byu.edu>

Steve Schmerler wrote:

>Hi
>
>After upgrading to the latest svn version I get
>
>In [4]: import scipy
>import misc -> failed: 'module' object has no attribute 'UnsignedInt8'
>
>Besides this, everything seems to work.
>
>

Please report any more little issues like this. I recently updated NumPy to place deprecated names in oldnumeric.py

Many places in SciPy still use the deprecated names. I tried to get them all, but I may have failed.

replacing numpy with numpy.oldnumeric should fix any problems that were introduced....

-Travis

From ryanlists at gmail.com Mon Jun 12 20:45:59 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 12 Jun 2006 20:45:59 -0400
Subject: [SciPy-user] getting all vertices of simplex from optimize.fmin
In-Reply-To: References: Message-ID:

This is a possible re-send from this past weekend.

Is there a way to make optimize.fmin output all vertices of the simplex? I would like to make a little animation of how the algorithm works. It seems like there are options to output the solution at each step, but that is only one point, instead of the whole simplex.

Thanks,

Ryan

From cookedm at physics.mcmaster.ca Mon Jun 12 20:48:34 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Mon, 12 Jun 2006 20:48:34 -0400
Subject: [SciPy-user] 'module' object has no attribute 'UnsignedInt8'
In-Reply-To: <448E028A.80706@ee.byu.edu>
References: <448E00BF.2060508@gmx.net> <448E028A.80706@ee.byu.edu>
Message-ID: <20060612204834.3e2d3762@arbutus.physics.mcmaster.ca>

On Mon, 12 Jun 2006 18:10:50 -0600 Travis Oliphant wrote:

> Steve Schmerler wrote:
>
> >Hi
> >
> >After upgrading to the latest svn version I get
> >
> >In [4]: import scipy
> >import misc -> failed: 'module' object has no attribute 'UnsignedInt8'
> >
> >Besides this, everything seems to work.
> >
> >
>
> Please report any more little issues like this. I recently updated
> NumPy to place deprecated names in oldnumeric.py
>
> Many places in SciPy still use the deprecated names. I tried to get
> them all, but I may have failed.

[one-up-ed your fix, Travis] The misc/ stuff should use dtypes now instead of typecodes. One advantage of dtypes is that it's clear from reading the source that the 'F' and 'I' types in PIL are float32 and int32...

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From amano at eps.s.u-tokyo.ac.jp Tue Jun 13 01:50:11 2006 From: amano at eps.s.u-tokyo.ac.jp (Takanobu Amano) Date: Tue, 13 Jun 2006 14:50:11 +0900 (JST) Subject: [SciPy-user] Bug in scipy.special.ivp Message-ID: <20060613.145011.16966510.amano@eps.s.u-tokyo.ac.jp> Hello, I found a small bug in scipy.special.ivp (scipy/Lib/special/basic.py) this patch is for the latest svn snapshot (revision 1882). Takanobu Amano --- basic.py.orig 2006-06-13 14:29:05.766772544 +0900 +++ basic.py 2006-06-13 14:23:58.534420748 +0900 @@ -168,7 +168,7 @@ if n == 0: return iv(v,z) else: - return (ivp(v-1,z,n-1) - ivp(v+1,z,n-1))/2.0 + return (ivp(v-1,z,n-1) + ivp(v+1,z,n-1))/2.0 def h1vp(v,z,n=1): """Return the nth derivative of H1v(z) with respect to z. From steffen.loeck at gmx.de Tue Jun 13 02:57:11 2006 From: steffen.loeck at gmx.de (Steffen Loeck) Date: Tue, 13 Jun 2006 08:57:11 +0200 Subject: [SciPy-user] Possible bug with Hermite polynomials Message-ID: <200606130857.11207.steffen.loeck@gmx.de> Hello, there seems to be a problem using elements of an array as first argument in the Hermite functions: >> import scipy >> scipy.__version__ '0.5.0.1941' >> a = scipy.arange(10) >> import scipy.special >> scipy.special.hermite(1)(2.0) 4.0 >> scipy.special.hermite(a[1])(2.0) 0.0 The result using the element of array 'a' is 0.0, while 4.0 is correct. Is there any way to fix this problem? Regards, Steffen From karol.langner at kn.pl Tue Jun 13 03:08:08 2006 From: karol.langner at kn.pl (Karol Langner) Date: Tue, 13 Jun 2006 09:08:08 +0200 Subject: [SciPy-user] Possible bug with Hermite polynomials In-Reply-To: <200606130857.11207.steffen.loeck@gmx.de> References: <200606130857.11207.steffen.loeck@gmx.de> Message-ID: <200606130908.08250.karol.langner@kn.pl> On Tuesday 13 June 2006 08:57, Steffen Loeck wrote: > Hello, > > there seems to be a problem using elements of an array as first argument in > > the Hermite functions: > >> import scipy > >> scipy.__version__ > > '0.5.0.1941' > > >> a = scipy.arange(10) > >> > >> import scipy.special > >> > >> scipy.special.hermite(1)(2.0) > > 4.0 > > >> scipy.special.hermite(a[1])(2.0) > > 0.0 > > The result using the element of array 'a' is 0.0, while 4.0 is correct. > Is there any way to fix this problem? 
> > Regards, > Steffen > I don't know what causes this, but I've noticed that it works fine in version 0.3.2, for instance: >>> import scipy.special >>> scipy.__version__ '0.3.2' >>> scipy.special.hermite >>> scipy.special.hermite(1)(2.0) 4.0 >>> a = scipy.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> scipy.special.hermite(a[1])(2.0) 4.0 Cheers, Karol -- written by Karol Langner wto cze 13 09:06:18 CEST 2006 From nwagner at iam.uni-stuttgart.de Tue Jun 13 03:13:46 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 13 Jun 2006 09:13:46 +0200 Subject: [SciPy-user] Possible bug with Hermite polynomials In-Reply-To: <200606130908.08250.karol.langner@kn.pl> References: <200606130857.11207.steffen.loeck@gmx.de> <200606130908.08250.karol.langner@kn.pl> Message-ID: <448E65AA.1000506@iam.uni-stuttgart.de> Karol Langner wrote: > On Tuesday 13 June 2006 08:57, Steffen Loeck wrote: > >> Hello, >> >> there seems to be a problem using elements of an array as first argument in >> >> the Hermite functions: >> >>>> import scipy >>>> scipy.__version__ >>>> >> '0.5.0.1941' >> >> >>>> a = scipy.arange(10) >>>> >>>> import scipy.special >>>> >>>> scipy.special.hermite(1)(2.0) >>>> >> 4.0 >> >> >>>> scipy.special.hermite(a[1])(2.0) >>>> >> 0.0 >> >> The result using the element of array 'a' is 0.0, while 4.0 is correct. >> Is there any way to fix this problem? >> >> Regards, >> Steffen >> >> > > I don't know what causes this, but I've noticed that it works fine in version > 0.3.2, for instance: > >>>> import scipy.special >>>> scipy.__version__ >>>> > '0.3.2' > >>>> scipy.special.hermite >>>> > > >>>> scipy.special.hermite(1)(2.0) >>>> > 4.0 > >>>> a = scipy.arange(10) >>>> a >>>> > array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) > >>>> scipy.special.hermite(a[1])(2.0) >>>> > 4.0 > > Cheers, > Karol > > Fixed in latest svn >>> a = scipy.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> import scipy.special >>> scipy.special.hermite(1)(2.0) 4.0 >>> scipy.special.hermite(a[1])(2.0) 4.0 >>> scipy.__version__ '0.5.0.1951' >>> numpy.__version__ '0.9.9.2613' From schofield at ftw.at Tue Jun 13 05:59:21 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 13 Jun 2006 11:59:21 +0200 Subject: [SciPy-user] getting all vertices of simplex from optimize.fmin In-Reply-To: References: Message-ID: <448E8C79.5020504@ftw.at> Ryan Krauss wrote: > This is a possible re-send from this past weekend. > > Is there a way to make optimize.fmin output all vertices of the > simplex? I would like to make a little animation of how the algorithm > works. It seems like there are options to output the solution at each > step, but that is only one point, instead of the whole simplex. > I've just checked in support for callback functions in the optimize routines. So you can now get the values of the parameters at each iteration. I'm not sure about how the Nelder-Mead algorithm works, but perhaps you can get all vertices by modifying the line in the fmin function in optimize.py from callback(sim[0]) to callback(sim) ? -- Ed From nwagner at iam.uni-stuttgart.de Tue Jun 13 06:58:53 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 13 Jun 2006 12:58:53 +0200 Subject: [SciPy-user] fmin_ncg hangs with latest svn Message-ID: <448E9A6D.6080106@iam.uni-stuttgart.de> Hi all, Running the test with 0.5.0.1951 works fine python -i test_ncg.py yields Optimization terminated successfully. 
Current function value: 0.081014
Iterations: 7
Function evaluations: 8
Gradient evaluations: 469
Hessian evaluations: 0
fmin_ncg 8
Approximated smallest eigenvalue 0.081014052771
smallest eigenvalue by linalg.eigvals 0.081014052771

>>> import scipy
>>> scipy.__version__
'0.5.0.1951'

but hangs for '0.5.0.1954'

Any idea?

Nils

-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_ncg.py
Type: text/x-python
Size: 819 bytes
Desc: not available
URL:

From maik.troemel at maitro.net Tue Jun 13 07:32:04 2006
From: maik.troemel at maitro.net (=?UTF-8?B?TWFpayBUcsO2bWVs?=)
Date: Tue, 13 Jun 2006 13:32:04 +0200
Subject: [SciPy-user] dtype
In-Reply-To: <448D96CD.8060701@gmail.com>
References: <448D9411.50400@maitro.net> <448D96CD.8060701@gmail.com>
Message-ID: <448EA234.2060007@maitro.net>

Hello,

thanks for help. Now it works.

Is there a possibility to make a byteswap while reading in with "fromstring"? Or do I have to make it with "arrh.byteswap()"?

Greetings
Maik

Robert Kern wrote:

>Maik Trömel wrote:
>
>>Hello list,
>>
>>i've got a question concerning datatypes.
>>I'm importing a file filled with 2-byte data values via
>>
>>arrh = numpy.fromstring(radFile, dtype = 'S2', count = 900*900)
>>
>>Now I want to convert the data via
>>
>>value = ord(arr[n][m][0]) * 256 + ord(arr[n][m][1])
>>
>>But if the value in the array is NULL I get an error:
>>
>>IndexError: string index out of range
>>
>>
>
>(A) In order for us to help you debug some code, you need to distill it down to
>the smallest code that actually displays the error, and then provide us the
>actual code.
>
>(B) In this case, there's a better way. See below.
>
>
>>So I think I have to choose another dtype. But I don't have any idea
>>which one.
>>Probably someone has an idea.
>>
>
>arrh = numpy.fromstring(radFile, dtype=numpy.int16, count=900*900)
>
>You don't need to do any other conversions. You may have to pay attention to
>byteswapping depending on the endianness of your platform, though.
>

From schofield at ftw.at Tue Jun 13 08:46:58 2006
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 13 Jun 2006 14:46:58 +0200
Subject: [SciPy-user] fmin_ncg hangs with latest svn
In-Reply-To: <448E9A6D.6080106@iam.uni-stuttgart.de>
References: <448E9A6D.6080106@iam.uni-stuttgart.de>
Message-ID: <448EB3C2.5000105@ftw.at>

Nils Wagner wrote:
> Hi all,
>
> Running the test ...
>
> hangs for '0.5.0.1954'
>
> Any idea?
>

Oops, I broke it. Fixed now. Thanks for the nice test case!

-- Ed

From nwagner at iam.uni-stuttgart.de Tue Jun 13 08:59:55 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 13 Jun 2006 14:59:55 +0200
Subject: [SciPy-user] fmin_ncg hangs with latest svn
In-Reply-To: <448EB3C2.5000105@ftw.at>
References: <448E9A6D.6080106@iam.uni-stuttgart.de> <448EB3C2.5000105@ftw.at>
Message-ID: <448EB6CB.3050509@iam.uni-stuttgart.de>

Ed Schofield wrote:
> Nils Wagner wrote:
>
>> Hi all,
>>
>> Running the test ...
>>
>> hangs for '0.5.0.1954'
>>
>
> Oops, I broke it. Fixed now. Thanks for the nice test case!
> > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Thank you for your prompt reply :-) Nils From nomo17k at gmail.com Tue Jun 13 18:41:16 2006 From: nomo17k at gmail.com (Taro Sato) Date: Tue, 13 Jun 2006 15:41:16 -0700 Subject: [SciPy-user] error importing scipy.weave Message-ID: I just checked out scipy from subversion (0.5.0.1962), and after building and installing following: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 I get the following error (attached) importing scipy.weave. Am I missing anything? Thank you for your time, Taro ---BEGIN OUTPUT-------------------------------- In [1]: import scipy.weave --------------------------------------------------------------------------- exceptions.NameError Traceback (most recent call last) /home/taro/ /usr/lib/python2.3/site-packages/scipy/weave/__init__.py 7 8 try: ----> 9 from blitz_tools import blitz 10 except ImportError: 11 pass # scipy (core) wasn't available /usr/lib/python2.3/site-packages/scipy/weave/blitz_tools.py 6 import slice_handler 7 import size_check ----> 8 import converters 9 10 from ast_tools import * /usr/lib/python2.3/site-packages/scipy/weave/converters.py 27 try: 28 import standard_array_spec ---> 29 default.append(standard_array_spec.array_converter()) 30 except ImportError: 31 pass /usr/lib/python2.3/site-packages/scipy/weave/c_spec.py in __init__(self) 72 73 def __init__(self): ---> 74 self.init_info() 75 self._build_information = [self.generate_build_info()] 76 /usr/lib/python2.3/site-packages/scipy/weave/standard_array_spec.py in init_info(self) 139 self.return_type = 'PyArrayObject*' 140 self.to_c_return = '(PyArrayObject*) py_obj' --> 141 self.matching_types = [ArrayType] 142 self.headers = ['"numpy/arrayobject.h"', 143 '',''] NameError: global name 'ArrayType' is not defined ---END OUTPUT-------------------------------- From oliphant.travis at ieee.org Tue Jun 13 19:20:50 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 13 Jun 2006 17:20:50 -0600 Subject: [SciPy-user] Possible bug with Hermite polynomials In-Reply-To: <200606130857.11207.steffen.loeck@gmx.de> References: <200606130857.11207.steffen.loeck@gmx.de> Message-ID: <448F4852.7050207@ieee.org> Steffen Loeck wrote: > Hello, > > there seems to be a problem using elements of an array as first argument in > the Hermite functions: > > > >>> import scipy >>> scipy.__version__ >>> > '0.5.0.1941' > > It may be the NumPy version that is causing this. So, please let us know which NumPy you are using. This works for me with the latest versions. -Travis From oliphant.travis at ieee.org Tue Jun 13 19:22:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 13 Jun 2006 17:22:26 -0600 Subject: [SciPy-user] Bug in scipy.special.ivp In-Reply-To: <20060613.145011.16966510.amano@eps.s.u-tokyo.ac.jp> References: <20060613.145011.16966510.amano@eps.s.u-tokyo.ac.jp> Message-ID: <448F48B2.9080501@ieee.org> Takanobu Amano wrote: > Hello, > > I found a small bug in scipy.special.ivp (scipy/Lib/special/basic.py) > this patch is for the latest svn snapshot (revision 1882). > > Takanobu Amano > > --- basic.py.orig 2006-06-13 14:29:05.766772544 +0900 > +++ basic.py 2006-06-13 14:23:58.534420748 +0900 > @@ -168,7 +168,7 @@ > if n == 0: > return iv(v,z) > else: > - return (ivp(v-1,z,n-1) - ivp(v+1,z,n-1))/2.0 > + return (ivp(v-1,z,n-1) + ivp(v+1,z,n-1))/2.0 > > Nice catch. Thank you. 
The other modified Bessel function also has an incorrect recurrence relation. Thank you for finding this.

-Travis

From cookedm at physics.mcmaster.ca Tue Jun 13 21:27:03 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Tue, 13 Jun 2006 21:27:03 -0400
Subject: [SciPy-user] error importing scipy.weave
In-Reply-To: References: Message-ID: <20060613212703.6c7751d2@arbutus.physics.mcmaster.ca>

On Tue, 13 Jun 2006 15:41:16 -0700 "Taro Sato" wrote:

> I just checked out scipy from subversion (0.5.0.1962), and after
> building and installing following:
>
> http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97
>
> I get the following error (attached) importing scipy.weave.
>
> Am I missing anything?

> /usr/lib/python2.3/site-packages/scipy/weave/standard_array_spec.py in
> init_info(self)
> 139 self.return_type = 'PyArrayObject*'
> 140 self.to_c_return = '(PyArrayObject*) py_obj'
> --> 141 self.matching_types = [ArrayType]
> 142 self.headers = ['"numpy/arrayobject.h"',
> 143 '','']
>
> NameError: global name 'ArrayType' is not defined

We're trying to get rid of the old Numeric names (like ArrayType), and not all the scipy modules are converted yet. I've updated weave to remove all uses of those names, so try it now.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From steffen.loeck at gmx.de Wed Jun 14 03:00:29 2006
From: steffen.loeck at gmx.de (Steffen Loeck)
Date: Wed, 14 Jun 2006 09:00:29 +0200
Subject: [SciPy-user] Possible bug with Hermite polynomials
In-Reply-To: <448F4852.7050207@ieee.org>
References: <200606130857.11207.steffen.loeck@gmx.de> <448F4852.7050207@ieee.org>
Message-ID: <200606140900.29124.steffen.loeck@gmx.de>

Travis Oliphant wrote:
> It may be the NumPy version that is causing this. So, please let us
> know which NumPy you are using. This works for me with the latest
> versions.

I used NumPy version 0.9.8. With the latest version it works fine for me as well.

Steffen

From victor.martinez at uib.es Wed Jun 14 10:47:39 2006
From: victor.martinez at uib.es (=?ISO-8859-1?Q?V=EDctor_Mart=EDnez-Moll?=)
Date: Wed, 14 Jun 2006 16:47:39 +0200
Subject: [SciPy-user] Problems with odeint
In-Reply-To: <448939BB.6030103@uib.es>
References: <448939BB.6030103@uib.es>
Message-ID: <4490218B.2050704@uib.es>

Hi again,

Sorry for bothering again, but as my message arrived just before the list problem and I've received no answer, I'm wondering if maybe some message was lost.

So, again, any suggestions on how to solve this simple 2nd order differential equation with SciPy? Or, what the hell did I do wrong to get such strange results?

Thanks in advance and best regards.

Victor

PS: If you think this is not the right forum or you think I could find the answer to my question reading any particular documentation please let me know too. I've read some of the documentation I found online but maybe I've missed something.

Víctor Martínez-Moll wrote:
> Hi all,
>
> I've been a SciLab user for some time and I'm evaluating SciPy as a
> development tool.
>
> The first thing I tried is to solve a simple second order differential
> equation using odeint(). The problem is that, depending on the function I
> want to integrate, I get nice results, but for most of them I get simply
> nothing or nonsense answers. It is not a problem of the function having
> strange behaviour or having singularity points.
> For example if I try to solve:
>
> d2y/dt2 = 1-sin(y)
>
> either I get nothing or wrong solutions (the best thing I got was
> setting hmin=0.01, atol=.001), while if I do about the same procedure in
> SciLab I get a nice and smooth set of curves. The strangest thing is
> that if I use exactly the same procedure to solve:
>
> d2y/dt2 = 1-y
>
> then I get the right solution, which seems to indicate that I'm doing
> the right thing (although of course I know I'm not, because I do not
> believe that odeint is unable to solve such a silly thing).
>
> I've only checked it with the last enthon distribution I found:
> enthon-python2.4-1.0.0.beta2.exe
>
> The simple procedure I wrote in Python and its equivalent in SciLab that
> does the right thing are:
>
> ######################################
> ## The Python one ##
> ######################################
>
> from scipy import *
> from matplotlib.pylab import *
>
> def dwdt(w,t):
>     return [w[1],1.0-sin(w[0])]
>
> t=arange(0.0,2.0*pi,.01)
>
> ww = integrate.odeint(dwdt,[0.0,0.0],t,hmin=0.01,atol=.001)
>
> y = ww[:,0]
> dy =ww[:,1]
> ddy = 1.0-sin(ww[:,0])
>
> plot(t,y,label='y')
> plot(t,dy,label='dy')
> plot(t,ddy,label='ddy')
> legend()
> show()
>
> ######################################
> ## The SciLab one ##
> ######################################
>
> function result = dwdt(t,w)
>     result(1) = w(2)
>     result(2) = 1.0-sin(w(1))
> endfunction
>
> t = 0.0:0.01:2.0*%pi;
>
> ww = ode([0.0;0.0],0.0,t,dwdt);
>
> y = ww(1,:);
> dy = ww(2,:);
> ddy=1.0-sin(ww(1,:));
>
> xset("window", 0)
> xbasc()
> plot2d(t,[y;dy;ddy]',leg="y@dy@ddy")
>
> ########################################
>
> Any ideas or suggestions will be welcome.
>

--
Víctor Martínez Moll | Universitat de les Illes Balears
Departament de Física | Edifici Mateu Orfila
Àrea d'Enginyeria Mecànica | E-07122, Palma de Mallorca, SPAIN
e-mail: victor.martinez at uib.es | Tel:34-971171374 Fax:34-971173426

From millman at berkeley.edu Wed Jun 14 12:01:58 2006
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 14 Jun 2006 18:01:58 +0200
Subject: [SciPy-user] Neuroimaging in Python programmer position
Message-ID:

Hello,

I am looking to fill a one-year programming position at UC Berkeley's Neuroscience Institute. You can find the job posting by searching for job #004644 here:

http://jobs.berkeley.edu/

The job basically involves serving as the lead architect for the NeuroImaging in Python (nipy) project:

http://neuroimaging.scipy.org/

We are in the early stages of the project but already have a large codebase written by Jonathan Taylor, a professor of statistics at Stanford. You can browse the svn repository here:

http://projects.scipy.org/neuroimaging/ni/browser/ni/trunk

Here is the API documentation:

http://neuroimaging.scipy.org/api/

The job will involve interacting with an international team of scientists and programmers who are committed to the project, and we expect the code to be used increasingly widely in the rapidly expanding field of neuroimaging. We make heavy use of scipy and numpy, and have already started to contribute to aspects of scipy (e.g. http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sandbox/models)

I have pasted the job description at the end of my email. Feel free to contact me if you have any questions or know of anyone I should contact.
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ----------------------------------------------------------------------------- Posting Title: Programmer/Analyst IV-Ucb Requisition: 004644 Department: Helen Wills Neuroscience Inst Location: Main Campus-Berkeley Salary: Annual salary range is $68,100 to $123,800 Note: Although full salary scale is listed, most offers will not exceed midpoint of the salary range. First Review Date: 06/09/2006 This requisition will remain open until filled. Job Description: As team leader of the BIC Neuroinformatics Support (BNS), provide day-to-day leadership as well as long-term planning. The Institute administers the Brain Imaging Center (BIC), which houses a high-resolution Magnetic Resonance Imaging (MRI) scanner, which is used to gather functional MRI (fMRI) data from human subjects. As a senior-level Programmer/Analyst (PA), assume an integral role in the design and development of the Neuroimaging Tools in Python (NiPy) including the work flow, data maintenance, and data processing system at the BIC. Work involves software development and integration of 3rd party tools (e.g., SPM) into the NiPy environment. Assume a key role in the continuing transition of our dataflow systems into a scalable and flexible architecture based on Python and web technologies using NiPy. Represent the unit and the university in collaborative development of analysis software with other universities and research institutes. Keep up with emerging technologies, evaluate software tools, and develop standards and processes for effective implementation, deployment, and maintenance of the architecture. Work independently and as part of a team, reporting directly to the leader of NICE. Responsibilities: 60% Maintain the NiPy neuroimaging analysis code, including incorporating new methods as they are developed. Modify existing analysis software including but not limited to porting programs to work in different operating environments, making existing programs more user-friendly, and selecting different parts of existing programs and joining them together to perform a new task. Develop a scalable, modular pipeline framework to incorporate these various analysis components. Deploy stable releases of this software suite according to established BNS policy. 20% Act as leader of the BNS, including both 1) training other team members and planning and assigning work assignments for other team members and as well as 2) creating and documenting BNS best practices and preferred standards. Direct the design and development of BIC subject and patient databases, including schema design, interface development, and integration with other tools. Conceive, develop and implement critical IT infrastructure including user and configuration management systems. Serve as a resource for BIC programmers and NiPy contributors. 15% Provide a front-line interface to end-users, accepting trouble reports and responding as appropriate, including contacting vendors and developers to report and resolve trouble. Help users with analysis software and data processing. Conduct user training and document BIC informatics' pipeline. Perform other duties as requested. 5% Attend workshops, seminars, and training sessions to maintain and improve professional skills. Requirements & Qualifications: Expert knowledge of Python. Four years experience with Linux (Mandrake and/or Redhat). Familiarity with Windows and MacOS X. 
Intermediate knowledge of C, C++, Matlab, and Java. Understanding of GNU programming tools including GCC, make, automake, and autoconf. Basic knowledge of signal processing and linear algebra. Strong interest and background in cognitive science and/or neuroscience. Experience with medical image processing. Understanding of MRI analysis. This position has been designated as sensitive and may require a Criminal Background Check. We reserve the right to make employment contingent upon successful completion of a Criminal Background Check. From vinicius.lobosco at paperplat.com Wed Jun 14 14:52:35 2006 From: vinicius.lobosco at paperplat.com (Vinicius Lobosco) Date: Wed, 14 Jun 2006 20:52:35 +0200 Subject: [SciPy-user] Problems with odeint In-Reply-To: <4490218B.2050704@uib.es> References: <448939BB.6030103@uib.es> <4490218B.2050704@uib.es> Message-ID: <1e2b8b840606141152u644e8191ndef562e8d51a7d1d@mail.gmail.com> Hi Victor! I just ran your code on my machine to take a closer look and I goot some nice results, which at first glance seem to be right. Please, check the attached figure. /Vinicius On 6/14/06, V?ctor Mart?nez-Moll wrote: > > Hi again, > Sorry for bothering again but as my message arrived just before the list > problem and I've received no answer I'm wondering if maybe some message > was lost. > > So, again, any suggestions on how to solve this simple 2nd order > differential equation with Scipy? Or, what the hell I did wrong to get > such strangre results? > > Thanks in advance and best regards. > > Victor > > PS: If you think this is not the right forum or you think I could find > the answer to my question reading any particular documentation please > let me know too. I've read some of the documentation I found online but > maybe I've missed something. > > En/na V?ctor Mart?nez-Moll ha escrit: > > Hi all, > > > > I've been a SciLab user for some time and I'm evaluating SciPy as a > > development tool. > > > > The first thing I tried is to solve a simple second order diferential > > equation using odeint(). The problem is that depending on the function I > > want to integrate I get nice results, but for most of them I get simply > > nothing or nonsense answers. Is not a problem of the function having a > > strange behaviour or having singularity points. For example if I try to > > solve: > > d2y/dt2 = 1-sin(y) > > either I get nothing or wrong solutions (the best thing I got was > > setting:hmin=0.01,atol=.001), while If I do about the same procedure in > > SciLab I get a nice and smooth set of curves. The strangest thing is > > that if I use exactly the same procedure to solve: > > d2y/dt2 = 1-y > > then I get the right solution, which seems to indicate that I'm doing > > the right thing (although of course I know I'm not because I do not > > belive that odeint is not able to solve such a silly thing). 
> > > > I've only checked it with the last enthon distribution I found: > > enthon-python2.4-1.0.0.beta2.exe > > > > The simple procedure I wrote in Python and its equivalent in SciLab that > > does the right thing in are: > > > > ###################################### > > ## The Python one ## > > ###################################### > > > > from scipy import * > > from matplotlib.pylab import * > > > > def dwdt(w,t): > > return [w[1],1.0-sin(w[0])] > > > > t=arange(0.0,2.0*pi,.01) > > > > ww = integrate.odeint(dwdt,[0.0,0.0],t,hmin=0.01,atol=.001) > > > > y = ww[:,0] > > dy =ww[:,1] > > ddy = 1.0-sin(ww[:,0]) > > > > plot(t,y,label='y') > > plot(t,dy,label='dy') > > plot(t,ddy,label='ddy') > > legend() > > show() > > > > ###################################### > > ## The SciLab one ## > > ###################################### > > > > function result = dwdt(t,w) > > result(1) = w(2) > > result(2) = 1.0-sin(w(1)) > > endfunction > > > > t = 0.0:0.01:2.0*%pi; > > > > ww = ode([0.0;0.0],0.0,t,dwdt); > > > > y = ww(1,:); > > dy = ww(2,:); > > ddy=1.0-sin(ww(1,:)); > > > > xset("window", 0) > > xbasc() > > plot2d(t,[y;dy;ddy]',leg="y at dy@ddy") > > > > ######################################## > > > > Any ideas or suggestions will be wellcome. > > > > > > -- > V?ctor Mart?nez Moll | Universitat de les Illes Balears > Departament de F?sica | Edifici Mateu Orfila > ?rea d'Enginyeria Mec?nica | E-07122, Palma de Mallorca, SPAIN > e-mail: victor.martinez at uib.es | Tel:34-971171374 Fax:34-971173426 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- --------------------------------- Vinicius Lobosco, PhD www.paperplat.com +46 8 612 7803 +46 73 925 8476 Bj?rnn?sv?gen 21 SE-113 47 Stockholm, Sweden -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinicius.lobosco at paperplat.com Wed Jun 14 14:53:25 2006 From: vinicius.lobosco at paperplat.com (Vinicius Lobosco) Date: Wed, 14 Jun 2006 20:53:25 +0200 Subject: [SciPy-user] Problems with odeint In-Reply-To: <1e2b8b840606141152u644e8191ndef562e8d51a7d1d@mail.gmail.com> References: <448939BB.6030103@uib.es> <4490218B.2050704@uib.es> <1e2b8b840606141152u644e8191ndef562e8d51a7d1d@mail.gmail.com> Message-ID: <1e2b8b840606141153w452dd0adkf8b2f0b84d4dbb13@mail.gmail.com> Typical... And now the figure! /vinicius On 6/14/06, Vinicius Lobosco wrote: > > Hi Victor! > > I just ran your code on my machine to take a closer look and I goot some > nice results, which at first glance seem to be right. Please, check the > attached figure. > > /Vinicius > > > On 6/14/06, V?ctor Mart?nez-Moll wrote: > > > > Hi again, > > Sorry for bothering again but as my message arrived just before the list > > problem and I've received no answer I'm wondering if maybe some message > > was lost. > > > > So, again, any suggestions on how to solve this simple 2nd order > > differential equation with Scipy? Or, what the hell I did wrong to get > > such strangre results? > > > > Thanks in advance and best regards. > > > > Victor > > > > PS: If you think this is not the right forum or you think I could find > > the answer to my question reading any particular documentation please > > let me know too. I've read some of the documentation I found online but > > maybe I've missed something. > > > > En/na V?ctor Mart?nez-Moll ha escrit: > > > Hi all, > > > > > > I've been a SciLab user for some time and I'm evaluating SciPy as a > > > development tool. 
> > > > > > The first thing I tried is to solve a simple second order diferential > > > equation using odeint(). The problem is that depending on the function > > I > > > want to integrate I get nice results, but for most of them I get > > simply > > > nothing or nonsense answers. Is not a problem of the function having a > > > strange behaviour or having singularity points. For example if I try > > to > > > solve: > > > d2y/dt2 = 1-sin(y) > > > either I get nothing or wrong solutions (the best thing I got was > > > setting:hmin=0.01,atol=.001), while If I do about the same procedure > > in > > > SciLab I get a nice and smooth set of curves. The strangest thing is > > > that if I use exactly the same procedure to solve: > > > d2y/dt2 = 1-y > > > then I get the right solution, which seems to indicate that I'm doing > > > the right thing (although of course I know I'm not because I do not > > > belive that odeint is not able to solve such a silly thing). > > > > > > I've only checked it with the last enthon distribution I found: > > > enthon-python2.4-1.0.0.beta2.exe > > > > > > The simple procedure I wrote in Python and its equivalent in SciLab > > that > > > does the right thing in are: > > > > > > ###################################### > > > ## The Python one ## > > > ###################################### > > > > > > from scipy import * > > > from matplotlib.pylab import * > > > > > > def dwdt(w,t): > > > return [w[1],1.0-sin(w[0])] > > > > > > t=arange(0.0,2.0*pi,.01) > > > > > > ww = integrate.odeint(dwdt,[0.0,0.0],t,hmin=0.01,atol=.001) > > > > > > y = ww[:,0] > > > dy =ww[:,1] > > > ddy = 1.0-sin(ww[:,0]) > > > > > > plot(t,y,label='y') > > > plot(t,dy,label='dy') > > > plot(t,ddy,label='ddy') > > > legend() > > > show() > > > > > > ###################################### > > > ## The SciLab one ## > > > ###################################### > > > > > > function result = dwdt(t,w) > > > result(1) = w(2) > > > result(2) = 1.0-sin(w(1)) > > > endfunction > > > > > > t = 0.0:0.01:2.0*%pi; > > > > > > ww = ode([0.0;0.0],0.0,t,dwdt); > > > > > > y = ww(1,:); > > > dy = ww(2,:); > > > ddy=1.0-sin(ww(1,:)); > > > > > > xset("window", 0) > > > xbasc() > > > plot2d(t,[y;dy;ddy]',leg="y at dy@ddy") > > > > > > ######################################## > > > > > > Any ideas or suggestions will be wellcome. > > > > > > > > > > -- > > V?ctor Mart?nez Moll | Universitat de les Illes Balears > > Departament de F?sica | Edifici Mateu Orfila > > ?rea d'Enginyeria Mec?nica | E-07122, Palma de Mallorca, SPAIN > > e-mail: victor.martinez at uib.es | Tel:34-971171374 Fax:34-971173426 > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > -- > --------------------------------- > Vinicius Lobosco, PhD > www.paperplat.com > > +46 8 612 7803 > +46 73 925 8476 > > Bj?rnn?sv?gen 21 > SE-113 47 Stockholm, Sweden > -- --------------------------------- Vinicius Lobosco, PhD www.paperplat.com +46 8 612 7803 +46 73 925 8476 Bj?rnn?sv?gen 21 SE-113 47 Stockholm, Sweden -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 44324 bytes Desc: not available URL: From elcorto at gmx.net Wed Jun 14 19:29:54 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 15 Jun 2006 01:29:54 +0200 Subject: [SciPy-user] optimize.brent* full_output=1 ? 
Message-ID: <44909BF2.5010407@gmx.net> Hi I found that some docstrings in optimize don't say which additional outputs the user gets if full_output = 1. Especially brentq, brenth, bisect and ridder are called via _zeros, and so I can't see what they're doing. docstrings not OK: bisect, brentq, brenth, golden, ridder In [43]: scipy.__version__ Out[43]: '0.5.0.1949' BTW, I have no experience in wrapping C or Fortran and stuff, so what are the .so files (e.g. _zeros.so)? cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From cookedm at physics.mcmaster.ca Thu Jun 15 01:52:50 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 15 Jun 2006 01:52:50 -0400 Subject: [SciPy-user] optimize.brent* full_output=1 ? In-Reply-To: <44909BF2.5010407@gmx.net> References: <44909BF2.5010407@gmx.net> Message-ID: <20060615015250.0a4acfab@arbutus.physics.mcmaster.ca> On Thu, 15 Jun 2006 01:29:54 +0200 Steve Schmerler wrote: > Hi > > I found that some docstrings in optimize don't say which additional > outputs the user gets if full_output = 1. Especially brentq, brenth, > bisect and ridder are called via _zeros, and so I can't see what > they're doing. > > docstrings not OK: bisect, brentq, brenth, golden, ridder > > In [43]: scipy.__version__ > Out[43]: '0.5.0.1949' I've updated the routines so that full_output is actually sane. With it True, these routines return (root, r), where r is a RootResults object (defined in optimize.zeros). r contains all the info you may want as attributes: r.iterations, r.function_calls, r.converged (True if the routine converged), and r.flag (more detail for convergence failures). [Before, they would return root, iterations, function_calls, flag, which isn't *quite* as bad as the minimization routines, which return a different number of results depending on the routine!] > BTW, I have no experience in wrapping C or Fortran and stuff, so what > are the .so files (e.g. _zeros.so)? C extensions. They're exactly like the shared libraries you'd find in /usr/lib, but Python loads them as requested at runtime. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Thu Jun 15 04:07:08 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 15 Jun 2006 04:07:08 -0400 Subject: [SciPy-user] Bug in scipy.special.ivp In-Reply-To: <448F48B2.9080501@ieee.org> References: <20060613.145011.16966510.amano@eps.s.u-tokyo.ac.jp> <448F48B2.9080501@ieee.org> Message-ID: <20060615040708.437d46f2@arbutus.physics.mcmaster.ca> On Tue, 13 Jun 2006 17:22:26 -0600 Travis Oliphant wrote: > Takanobu Amano wrote: > > Hello, > > > > I found a small bug in scipy.special.ivp (scipy/Lib/special/basic.py) > > this patch is for the latest svn snapshot (revision 1882). > > > > Takanobu Amano > > > > --- basic.py.orig 2006-06-13 14:29:05.766772544 +0900 > > +++ basic.py 2006-06-13 14:23:58.534420748 +0900 > > @@ -168,7 +168,7 @@ > > if n == 0: > > return iv(v,z) > > else: > > - return (ivp(v-1,z,n-1) - ivp(v+1,z,n-1))/2.0 > > + return (ivp(v-1,z,n-1) + ivp(v+1,z,n-1))/2.0 > > > > > Nice catch. Thank you. The other modified Bessel Function also has an > incorrect recurrence relation. Thank you for finding this. I've replaced all the Bessel function derivatives with a non-recursive version.
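(A quick numerical check of the corrected identity, I_v'(z) = (I_{v-1}(z) + I_{v+1}(z))/2, written as a sketch against scipy.special.iv; the order and argument below are arbitrary illustrative values.)

from scipy.special import iv

v, z, h = 1.5, 2.0, 1e-6
fd = (iv(v, z + h) - iv(v, z - h)) / (2 * h)  # central finite difference
rec = 0.5 * (iv(v - 1, z) + iv(v + 1, z))     # the corrected recurrence
print fd, rec                                  # the two should agree to many digits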
This lead to finding that the Bessel functions (jv, yv, iv, kv, hankel1, and hankel2) didn't handle negative orders correctly, which I've fixed. Plus more test cases (mostly for kvp, b/c that's the weirdest one). [The recurrence is probably fine, but I wouldn't have found the errors otherwise :D] -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fperez.net at gmail.com Thu Jun 15 05:07:23 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 15 Jun 2006 03:07:23 -0600 Subject: [SciPy-user] Problems with odeint In-Reply-To: <4490218B.2050704@uib.es> References: <448939BB.6030103@uib.es> <4490218B.2050704@uib.es> Message-ID: On 6/9/06, V?ctor Mart?nez-Moll wrote: > Hi all, > > I've been a SciLab user for some time and I'm evaluating SciPy as a > development tool. > > The first thing I tried is to solve a simple second order diferential > equation using odeint(). The problem is that depending on the function I > want to integrate I get nice results, but for most of them I get simply > nothing or nonsense answers. Is not a problem of the function having a > strange behaviour or having singularity points. For example if I try to > solve: > d2y/dt2 = 1-sin(y) > either I get nothing or wrong solutions (the best thing I got was > setting:hmin=0.01,atol=.001), while If I do about the same procedure in > SciLab I get a nice and smooth set of curves. The strangest thing is > that if I use exactly the same procedure to solve: > d2y/dt2 = 1-y > then I get the right solution, which seems to indicate that I'm doing > the right thing (although of course I know I'm not because I do not > belive that odeint is not able to solve such a silly thing). > > I've only checked it with the last enthon distribution I found: > enthon-python2.4-1.0.0.beta2.exe > > The simple procedure I wrote in Python and its equivalent in SciLab that > does the right thing in are: I'm sorry, but I get basically the same results with both. I'm attaching a slightly modified version of the python script you wrote, which prints some debug information. On my system, this information reads: In [7]: run odebug numerix flag : Numeric mpl version: 0.87.3 scipy version: 0.5.0.1940 I've attached a png of the resulting plot. I've never used scilab before, but just brute-pasting your example into a scilab window and exporting the resulting plot gave me the attached file. >From what I can see, both results look more or less consistent (I haven't done numerical accuracy checks, I'm just looking at the figures). Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: odebug.py Type: text/x-python Size: 529 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: odeplot_scipy.png Type: image/png Size: 35304 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: odeplot_scilab.png Type: image/png Size: 3438 bytes Desc: not available URL: From victor.martinez at uib.es Thu Jun 15 05:21:16 2006 From: victor.martinez at uib.es (=?ISO-8859-1?Q?V=EDctor_Mart=EDnez-Moll?=) Date: Thu, 15 Jun 2006 11:21:16 +0200 Subject: [SciPy-user] Problems with odeint In-Reply-To: <1e2b8b840606141153w452dd0adkf8b2f0b84d4dbb13@mail.gmail.com> References: <448939BB.6030103@uib.es> <4490218B.2050704@uib.es> <1e2b8b840606141152u644e8191ndef562e8d51a7d1d@mail.gmail.com> <1e2b8b840606141153w452dd0adkf8b2f0b84d4dbb13@mail.gmail.com> Message-ID: <4491268C.7040007@uib.es> Thanks Vinicius! Nice results! The figure looks right and it is almost identical to what I get with Scilab. But unfortunately when I run it on my computer the only thing I get is: Repeated error test failures (internal error). Run with full_output = 1 to get quantitative information. And if I remove hmin=0.01,atol=.001 then I simply get no error message but the same empty screen (in fact is not empty, there is a horizontal line at y =0 for dy and another one at y = 1 for ddy). Did you change any parameter? If not it looks like a bug in my odeint version. I use the one that comes with: enthon-python2.4-1.0.0.beta2.exe. What version did you run? En/na Vinicius Lobosco ha escrit: > Typical... And now the figure! > /vinicius > > On 6/14/06, *Vinicius Lobosco* > wrote: > > Hi Victor! > > I just ran your code on my machine to take a closer look and I goot > some nice results, which at first glance seem to be right. Please, > check the attached figure. > > /Vinicius > > > -- > --------------------------------- > Vinicius Lobosco, PhD > www.paperplat.com > > +46 8 612 7803 > +46 73 925 8476 > > Bj?rnn?sv?gen 21 > SE-113 47 Stockholm, Sweden > > > > > -- > --------------------------------- > Vinicius Lobosco, PhD > www.paperplat.com > > +46 8 612 7803 > +46 73 925 8476 > > Bj?rnn?sv?gen 21 > SE-113 47 Stockholm, Sweden > ------------------------------------------------------------------------ > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- V?ctor Mart?nez Moll | Universitat de les Illes Balears Departament de F?sica | Edifici Mateu Orfila ?rea d'Enginyeria Mec?nica | E-07122, Palma de Mallorca, SPAIN e-mail: victor.martinez at uib.es | Tel:34-971171374 Fax:34-971173426 From victor.martinez at uib.es Thu Jun 15 05:41:28 2006 From: victor.martinez at uib.es (=?ISO-8859-1?Q?V=EDctor_Mart=EDnez-Moll?=) Date: Thu, 15 Jun 2006 11:41:28 +0200 Subject: [SciPy-user] Problems with odeint In-Reply-To: References: <448939BB.6030103@uib.es> <4490218B.2050704@uib.es> Message-ID: <44912B48.9070802@uib.es> Thanks Fernando! Using your script I get the right curves too. I had only to change the line: import pylab as P by import matplotlib.pylab as P On the other hand the debugging lines you added showed: numerix flag : Numeric mpl version: 0.87.2 scipy version: 0.4.9.1893 The only thing that worries me now is to know what was wrong with my script (if anything). I'll try to discover when I have some time, so any suggestions will be welcome. Cheers, V?ctor En/na Fernando Perez ha escrit: > On 6/9/06, V?ctor Mart?nez-Moll wrote: >> Hi all, >> >> I've been a SciLab user for some time and I'm evaluating SciPy as a >> development tool. >> >> The first thing I tried is to solve a simple second order diferential >> equation using odeint(). 
The problem is that depending on the function I >> want to integrate I get nice results, but for most of them I get simply >> nothing or nonsense answers. Is not a problem of the function having a >> strange behaviour or having singularity points. For example if I try to >> solve: >> d2y/dt2 = 1-sin(y) >> either I get nothing or wrong solutions (the best thing I got was >> setting:hmin=0.01,atol=.001), while If I do about the same procedure in >> SciLab I get a nice and smooth set of curves. The strangest thing is >> that if I use exactly the same procedure to solve: >> d2y/dt2 = 1-y >> then I get the right solution, which seems to indicate that I'm doing >> the right thing (although of course I know I'm not because I do not >> belive that odeint is not able to solve such a silly thing). >> >> I've only checked it with the last enthon distribution I found: >> enthon-python2.4-1.0.0.beta2.exe >> >> The simple procedure I wrote in Python and its equivalent in SciLab that >> does the right thing in are: > > I'm sorry, but I get basically the same results with both. I'm > attaching a slightly modified version of the python script you wrote, > which prints some debug information. On my system, this information > reads: > > In [7]: run odebug > numerix flag : Numeric > mpl version: 0.87.3 > scipy version: 0.5.0.1940 > > I've attached a png of the resulting plot. > > I've never used scilab before, but just brute-pasting your example > into a scilab window and exporting the resulting plot gave me the > attached file. > > >> From what I can see, both results look more or less consistent (I > haven't done numerical accuracy checks, I'm just looking at the > figures). > > Cheers, > > f > > > ------------------------------------------------------------------------ > > import math > import matplotlib as M > import pylab as P > import scipy as S > import scipy.integrate > > # debug info > print 'numerix flag :',P.rcParams['numerix'] > print 'mpl version:',M.__version__ > print 'scipy version:',S.__version__ > > def dwdt(w,t): > return [w[1],1.0-math.sin(w[0])] > > t = S.arange(0.0,2.0*S.pi,.01) > > ww = S.integrate.odeint(dwdt,[0.0,0.0],t,hmin=0.01,atol=.001) > > y = ww[:,0] > dy =ww[:,1] > ddy = 1.0-S.sin(ww[:,0]) > > P.figure() > P.plot(t,y,label='y') > P.plot(t,dy,label='dy') > P.plot(t,ddy,label='ddy') > P.legend() > P.show() > > > ------------------------------------------------------------------------ > > > ------------------------------------------------------------------------ > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- V?ctor Mart?nez Moll | Universitat de les Illes Balears Departament de F?sica | Edifici Mateu Orfila ?rea d'Enginyeria Mec?nica | E-07122, Palma de Mallorca, SPAIN e-mail: victor.martinez at uib.es | Tel:34-971171374 Fax:34-971173426 From vinicius.lobosco at gmail.com Thu Jun 15 05:55:19 2006 From: vinicius.lobosco at gmail.com (Vinicius Lobosco) Date: Thu, 15 Jun 2006 11:55:19 +0200 Subject: [SciPy-user] Problems with odeint In-Reply-To: <44912B48.9070802@uib.es> References: <448939BB.6030103@uib.es> <44912B48.9070802@uib.es> Message-ID: <200606151155.19859.vinicius.lobosco@gmail.com> So may have had a name conflict with another package. Sounds as a good strategy to import as something. Something I usually dont... 
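(A minimal sketch of the hazard behind this advice, using illustrative modules rather than the exact clash in the script above: with wildcard imports, whichever module is imported last silently rebinds shared names, so an array-aware function can be replaced by a scalar-only one.)

from numpy import *     # numpy's sin accepts arrays
from math import *      # silently rebinds sin to the scalar-only math.sin

try:
    sin(array([0.0, 1.0]))
except TypeError, e:    # math.sin cannot digest an array argument
    print 'sin was shadowed:', e

Namespaced imports such as import matplotlib.pylab as P sidestep the problem, because every name keeps an explicit origin.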
On Thursday 15 June 2006 11.41, V?ctor Mart?nez-Moll wrote: > Thanks Fernando! > > Using your script I get the right curves too. I had only to change the > line: import pylab as P > by > import matplotlib.pylab as P > > On the other hand the debugging lines you added showed: > > numerix flag : Numeric > mpl version: 0.87.2 > scipy version: 0.4.9.1893 > > The only thing that worries me now is to know what was wrong with my > script (if anything). I'll try to discover when I have some time, so any > suggestions will be welcome. > > Cheers, > > V?ctor > > En/na Fernando Perez ha escrit: > > On 6/9/06, V?ctor Mart?nez-Moll wrote: > >> Hi all, > >> > >> I've been a SciLab user for some time and I'm evaluating SciPy as a > >> development tool. > >> > >> The first thing I tried is to solve a simple second order diferential > >> equation using odeint(). The problem is that depending on the function I > >> want to integrate I get nice results, but for most of them I get simply > >> nothing or nonsense answers. Is not a problem of the function having a > >> strange behaviour or having singularity points. For example if I try to > >> solve: > >> d2y/dt2 = 1-sin(y) > >> either I get nothing or wrong solutions (the best thing I got was > >> setting:hmin=0.01,atol=.001), while If I do about the same procedure in > >> SciLab I get a nice and smooth set of curves. The strangest thing is > >> that if I use exactly the same procedure to solve: > >> d2y/dt2 = 1-y > >> then I get the right solution, which seems to indicate that I'm doing > >> the right thing (although of course I know I'm not because I do not > >> belive that odeint is not able to solve such a silly thing). > >> > >> I've only checked it with the last enthon distribution I found: > >> enthon-python2.4-1.0.0.beta2.exe > >> > >> The simple procedure I wrote in Python and its equivalent in SciLab that > >> does the right thing in are: > > > > I'm sorry, but I get basically the same results with both. I'm > > attaching a slightly modified version of the python script you wrote, > > which prints some debug information. On my system, this information > > reads: > > > > In [7]: run odebug > > numerix flag : Numeric > > mpl version: 0.87.3 > > scipy version: 0.5.0.1940 > > > > I've attached a png of the resulting plot. > > > > I've never used scilab before, but just brute-pasting your example > > into a scilab window and exporting the resulting plot gave me the > > attached file. > > > >> From what I can see, both results look more or less consistent (I > > > > haven't done numerical accuracy checks, I'm just looking at the > > figures). 
> > > > Cheers, > > > > f > > > > > > ------------------------------------------------------------------------ > > > > import math > > import matplotlib as M > > import pylab as P > > import scipy as S > > import scipy.integrate > > > > # debug info > > print 'numerix flag :',P.rcParams['numerix'] > > print 'mpl version:',M.__version__ > > print 'scipy version:',S.__version__ > > > > def dwdt(w,t): > > return [w[1],1.0-math.sin(w[0])] > > > > t = S.arange(0.0,2.0*S.pi,.01) > > > > ww = S.integrate.odeint(dwdt,[0.0,0.0],t,hmin=0.01,atol=.001) > > > > y = ww[:,0] > > dy =ww[:,1] > > ddy = 1.0-S.sin(ww[:,0]) > > > > P.figure() > > P.plot(t,y,label='y') > > P.plot(t,dy,label='dy') > > P.plot(t,ddy,label='ddy') > > P.legend() > > P.show() > > > > > > ------------------------------------------------------------------------ > > > > > > ------------------------------------------------------------------------ > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user From jrk9185 at rit.edu Thu Jun 15 10:57:27 2006 From: jrk9185 at rit.edu (Joseph Kardamis (RIT Student)) Date: Thu, 15 Jun 2006 10:57:27 -0400 Subject: [SciPy-user] Low pass filter design and usage Message-ID: <7B4586F966AF25489F9D6CE0C5F97262499D92@svits27.main.ad.rit.edu> I need help in creating a low pass filter; I have very little experience in digital signal processing, if any. The context of the lowpass filter is as such: an audio signal is split into an array of subband signals attuned to particular frequencies. These subband signals are then processed with full-wave compression, half-wave rectification, and lowpass filtering. It seems that the scipy.signal module should have what I need to perform lowpass filtering, but my lack of knowledge in DSP is making it difficult to know what I'm doing or where even to start. For each band, I know the center frequency of the band, and the lowpass filtering occurs at twice the center frequency. Any help would be greatly appreciated. Cheers, -Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From alford at wuphys.wustl.edu Thu Jun 15 22:54:48 2006 From: alford at wuphys.wustl.edu (Mark Alford) Date: Thu, 15 Jun 2006 21:54:48 -0500 (CDT) Subject: [SciPy-user] SciPy web docs: add FC5 install instructions Message-ID: I just installed SciPy on Fedora Core 5. After some work, I found out that it was easy: yum install numpy yum install lapack-devel blas-devel setenv ATLAS /usr/lib/atlas # Did this achieve anything? Download and unzip scipy-0.4.8 to /usr/local/src/ cd /usr/local/src/scipy-0.4.8 # 0.4.8 is compatible with FC5 numpy python setup.py install >& install.log Shouldn't this info be on the SciPy installation help page http://www.scipy.org/Installing_SciPy/Linux What's there now is a link to "the unofficial instructions by written by Steve Baum.". That is a long and impressively detailed document which will be very helpful to some experts. But I think it would help a lot of simple folk to also see the recipe I just gave above. How can we add this info to that web page? From mforbes at alum.mit.edu Fri Jun 16 01:38:20 2006 From: mforbes at alum.mit.edu (Michael McNeil Forbes) Date: Thu, 15 Jun 2006 22:38:20 -0700 (PDT) Subject: [SciPy-user] Polylogarithms Message-ID: Hi, Is anyone aware of good python algorithms for computing polylogarithms? 
(The principal branch of the following sum analytically continued for larger x). Li[s](x) = Sum(x^n/n^s,n=1..infinity) Maple offers this as polylog(s,x). There are a few special cases (dilogs for example with s=2) floating around on the internet and in the GNU scientific library (the Fermi-Dirac functions) but I have not been able to find a good algorithm for general s and x (non-integer s). Thanks, Michael. From cookedm at physics.mcmaster.ca Fri Jun 16 02:17:32 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 16 Jun 2006 02:17:32 -0400 Subject: [SciPy-user] Polylogarithms In-Reply-To: References: Message-ID: <20060616061732.GA16404@arbutus.physics.mcmaster.ca> On Thu, Jun 15, 2006 at 10:38:20PM -0700, Michael McNeil Forbes wrote: > Hi, > > Is anyone aware of good python algorithms for computing polylogarithms? > (The principal branch of the following sum analytically continued for > larger x). > > Li[s](x) = Sum(x^n/n^s,n=1..infinity) > > Maple offers this as polylog(s,x). There are a few special cases (dilogs > for example with s=2) floating around on the internet and in the GNU > scientific library (the Fermi-Dirac functions) but I have not been able to > find a good algorithm for general s and x (non-integer s). It's not going to help you for general s, but the Cephes library has it for integer s: http://www.moshier.net/#Cephes (And no, I don't know why we don't have it in scipy.special. I guess I should work on ticket #15.) Here's an implementation of Lerch Phi (polylog(s,x) = x LerchPhi(x, s, 1)): http://www.mpi-hd.mpg.de/personalhomes/ulj/jentschura/Source/ It doesn't say what licence it's under, but you can contact the authors. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From arnd.baecker at web.de Fri Jun 16 02:32:40 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 16 Jun 2006 08:32:40 +0200 (CEST) Subject: [SciPy-user] SciPy web docs: add FC5 install instructions In-Reply-To: References: Message-ID: On Thu, 15 Jun 2006, Mark Alford wrote: > I just installed SciPy on Fedora Core 5. After some work, I found out > that it was easy: > > yum install numpy > yum install lapack-devel blas-devel > setenv ATLAS /usr/lib/atlas # Did this achieve anything? > Download and unzip scipy-0.4.8 to /usr/local/src/ > cd /usr/local/src/scipy-0.4.8 # 0.4.8 is compatible with FC5 numpy > python setup.py install >& install.log > > > Shouldn't this info be on the SciPy installation help page > http://www.scipy.org/Installing_SciPy/Linux > > What's there now is a link to "the unofficial instructions written by > Steve Baum.". That is a long and impressively detailed document which > will be very helpful to some experts. But I think it would help a lot of > simple folk to also see the recipe I just gave above. How can we add > this info to that web page? That is quite easy: get yourself an account for the wiki (under "User"/"Login" - see "First time") and add the above. I think that we should collect information like this for all relevant distributions, including installations from source. So a structure for that page *might* be Distributions ============= - Fedora Core 5 - SUSE 10.1 - debian sarge - ubuntu (whatever the present name is ;-) - ...
Installation from Source ======================== Here we could list the required packages for each distribution and mayabe refer to the generic installation notes http://www.scipy.org/Installing_SciPy/BuildingGeneral (there is also the non-authorative stuff http://www.scipy.org/ArndBaecker/InstallScipy ) Best, Arnd From mforbes at alum.mit.edu Fri Jun 16 02:52:56 2006 From: mforbes at alum.mit.edu (Michael McNeil Forbes) Date: Thu, 15 Jun 2006 23:52:56 -0700 (PDT) Subject: [SciPy-user] Polylogarithms In-Reply-To: <20060616061732.GA16404@arbutus.physics.mcmaster.ca> References: <20060616061732.GA16404@arbutus.physics.mcmaster.ca> Message-ID: Thanks! That looks like it should work for me, but the licence is rather restrictive, requiring citation if used for publication and permission for commercial use. (See the documentation below.) http://www.mpi-hd.mpg.de/personalhomes/ulj/jentschura/downloads.html I assume this is inconsistent with scipy's licensing requirements. Michael. > On Thu, Jun 15, 2006 at 10:38:20PM -0700, Michael McNeil Forbes wrote: > > Is anyone aware of good python algorithms for computing polylogarithms? ... > Here's an implementation of Lerch Phi (polylog(s,x) = x LerchPhi(x, s, 1)): > http://www.mpi-hd.mpg.de/personalhomes/ulj/jentschura/Source/ > It doesn't say what licence it's under, but you can contact the authors. From lev at columbia.edu Fri Jun 16 10:15:45 2006 From: lev at columbia.edu (Lev Givon) Date: Fri, 16 Jun 2006 10:15:45 -0400 Subject: [SciPy-user] Low pass filter design and usage In-Reply-To: <7B4586F966AF25489F9D6CE0C5F97262499D92@svits27.main.ad.rit.edu> References: <7B4586F966AF25489F9D6CE0C5F97262499D92@svits27.main.ad.rit.edu> Message-ID: <20060616141545.GC19539@avicenna.cc.columbia.edu> Received from Joseph Kardamis (RIT Student) on Thu, Jun 15, 2006 at 10:57:27AM EDT: > > I need help in creating a low pass filter; I have very little > experience in digital signal processing, if any. The context of the > lowpass filter is as such: an audio signal is split into an array of > subband signals attuned to particular frequencies. These subband > signals are then processed with full-wave compression, half-wave > rectification, and lowpass filtering. It seems that the > scipy.signal module should have what I need to perform lowpass > filtering, but my lack of knowledge in DSP is making it difficult to > know what I'm doing or where even to start. > > For each band, I know the center frequency of the band, and the > lowpass filtering occurs at twice the center frequency. > > Any help would be greatly appreciated. > > Cheers, > -Joe Mathworks' documentation for Matlab's Signal Processing Toolbox is a good starting point; it contains a number of examples that can be recoded in Python using the DSP functions in scipy.signal: http://www.mathworks.com/access/helpdesk/help/toolbox/signal/ L.G. From nomo17k at gmail.com Fri Jun 16 12:25:02 2006 From: nomo17k at gmail.com (Taro Sato) Date: Fri, 16 Jun 2006 16:25:02 +0000 (UTC) Subject: [SciPy-user] error importing scipy.weave References: Message-ID: Taro Sato gmail.com> writes: > > I just checked out scipy from subversion (0.5.0.1962), and after > building and installing following: > > ... Just for completeness...the latest revision from subversion fixed this problem already. Thanks for your response. 
Taro From nomo17k at gmail.com Fri Jun 16 12:28:59 2006 From: nomo17k at gmail.com (Taro Sato) Date: Fri, 16 Jun 2006 16:28:59 +0000 (UTC) Subject: [SciPy-user] warnings in leastsq and fsolve in scipy.optimize Message-ID: Hi there. I have a suggestion for improving leastsq and fsolve in scipy.optimize. One thing I don't like about these routines is that depending on the stopping condition (usually when solutions do not converge for various reasons), these functions print out a "warning" message to stdout, simply using the built-in print statement; the stopping condition is returned from the functions in any case, so users can trap and handle them accordingly if they choose to do so. I think it's better to at least print out this kind of information using something like warnings module facilities, so that users can at least suppress the message if they wish. What do you think??? Thank you for your time, Taro From a.mcmorland at auckland.ac.nz Fri Jun 16 23:56:04 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Sat, 17 Jun 2006 15:56:04 +1200 Subject: [SciPy-user] Cookbook: Interpolation of an N-D curve error Message-ID: <44937D54.5010108@auckland.ac.nz> Hi all, I'm getting a TypeError when trying the N-D curve cookbook example (http://www.scipy.org/Cookbook/Interpolation) with numpy 0.9.9.2630 and scipy 0.5.0.1979. The error is: In [158]: tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) [snip] /usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in splprep(x, w, u, ub, ue, k, task, s, t, full_output, nest, per, quiet) 215 iwrk=_parcur_cache['iwrk'] 216 t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, --> 217 nest,wrk,iwrk,per) 218 _parcur_cache['u']=o['u'] 219 _parcur_cache['ub']=o['ub'] TypeError: array cannot be safely cast to required type I'm guessing either (a) something is wrong in the code, or more likely, (b) something's been deliberately changed and the example needs updating, or (a distinct possibility) (c) I'm doing something wrong. Cheers, Angus. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. From agree74 at hotmail.com Sat Jun 17 02:17:35 2006 From: agree74 at hotmail.com (A. Rachel Grinberg) Date: Fri, 16 Jun 2006 23:17:35 -0700 Subject: [SciPy-user] Python - Matlab least squares difference Message-ID: Robert, Thanks for your response. The solution to the least square problem minimizes the l2 norm of the residuals. In my example the residual of Matlab's solution is ||A*(0,0,1)'-b|| = ||0|| = 0, whereas python's solution yields a number that is very close to zero. Rachel A. Rachel Grinberg wrote: >Hi, > >I noticed a difference between the linear least square solutions in Python >and Matlab. I understand that if the system is underdetermined the solution >is not going to be unique, nevertheless I would like figure out the >algorithm Matlab is using, since it seems "better" to me. For example, >let's say I have a Matrix > 2 1 0 >A = 1 1 1 and b = (0,1)' > >While Matlab yields (0,0,1)' as the solution to A\b, scipy's result for >linalg.linear_least_squares(A,b) is > >array([[-0.16666667], > [ 0.33333333], > [ 0.83333333]]) > >Any ideas, how Matlab's A\b is implemented? 
Not sure, but it's wrong (more or less). In underdetermined linear least squares problems, it is conventional to choose the solution that has minimum L2-norm. >>>A = array([[2., 1, 0], [1, 1, 1]]) >>>A array([[ 2., 1., 0.], [ 1., 1., 1.]]) >>>b = array([0., 1.]) >>>linalg.lstsq(A, b) (array([-0.16666667, 0.33333333, 0.83333333]), array([], dtype=float64), 2, array([ 2.6762432 , 0.91527173])) >>>x = _[0] >>>dot(x, x) 0.83333333333333426 >>>dot([0., 0, 1], [0., 0, 1]) 1.0 Implementations that do something else have some 'splainin' to do. Of course, A\b may not quite be "do linear least squares" in Matlab but something else. I don't know. FWIW, Octave gives me the same answer as numpy/scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
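(A short sketch that makes the quoted session's point concrete, assuming the same scipy.linalg.lstsq: both candidate vectors reproduce b, but the lstsq answer is the one with minimum norm.)

import numpy
from scipy import linalg

A = numpy.array([[2., 1., 0.], [1., 1., 1.]])
b = numpy.array([0., 1.])
x = linalg.lstsq(A, b)[0]       # minimum-norm solution, ~[-1/6, 1/3, 5/6]
m = numpy.array([0., 0., 1.])   # the answer Matlab reportedly returns

print numpy.dot(A, x) - b       # residual: zero up to rounding
print numpy.dot(A, m) - b       # residual: exactly zero
print numpy.dot(x, x)           # 0.8333..., the smaller squared norm
print numpy.dot(m, m)           # 1.0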
From robert.kern at gmail.com Sat Jun 17 03:07:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 Jun 2006 02:07:57 -0500 Subject: [SciPy-user] Python - Matlab least squares difference In-Reply-To: References: Message-ID: <4493AA4D.5020905@gmail.com> A. Rachel Grinberg wrote: > Robert, > > Thanks for your response. > > The solution to the least square problem minimizes the l2 norm of the > residuals. Just to be clear, my point was that when the least squares problem is underdetermined, there are an infinite number of vectors that minimize the L2-norm of the residual equally. It is conventional to regularize the problem by choosing the single vector that has the minimum L2-norm from that infinite set. Sometimes because it's the choice that makes the most sense, but mostly because the SVD gives it to you automatically. > In my example the residual of Matlab's solution is > ||A*(0,0,1)'-b|| = ||0|| = 0, whereas python's solution yields a number that > is very close to zero. Only because 0.0 and 1.0 and 2.0 can be exactly described in floating point arithmetic (and presumably only because Matlab pulled its (unconventional) answer out of thin air magically rather than actually doing the floating point calculations that it should be doing). numpy's answer is as close to 0 as you can reasonably expect given floating-point arithmetic. Matlab's answer is not actually better than numpy's. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josh8912 at yahoo.com Sat Jun 17 13:59:07 2006 From: josh8912 at yahoo.com (JJ) Date: Sat, 17 Jun 2006 10:59:07 -0700 (PDT) Subject: [SciPy-user] error on testing installed package and question on calling lapack functions Message-ID: <20060617175907.94077.qmail@web51705.mail.yahoo.com> Hello all: First a quick question on calling LAPACK functions. It seems that scipy does not have a function for calculating the reciprocal condition number of a matrix. If I can pass a matrix to the LAPACK function DGECON I can get the values I need, but I don't understand the syntax for calling LAPACK functions. I read about lapack.get_lapack_funcs but still am confused. Could someone offer a simple example to call a LAPACK function from Python? Second, with some effort I got numpy and scipy to install on my AMD 64-bit machine using the ACML libs. I did install cblas but as far as I could tell, neither numpy nor scipy linked to the libraries. I did get one error when I ran the tests after installing. If anyone is interested, here it is: > ---------- > FAIL: check_cdf > (scipy.stats.tests.test_distributions.test_f) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "", line 9, in check_cdf > AssertionError: D = 0.302318897387; pval = > 0.00313615240765; alpha = 0.01 > args = (1.0151727844701894, 1.8204690429811792) > ----------------------- Also, for what it's worth, the only way I could get numpy to install correctly was by using gfortran as the compiler and linking to the gfortran libs. JJ
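(No answer to JJ's LAPACK question appears in this digest; the calling pattern he is asking about looks roughly like the sketch below. This is an assumption-laden example: getrf, the LU factorization, stands in for DGECON, which may not have had a wrapper at the time, and the exact return signatures can vary between scipy versions.)

from numpy import array
from scipy.linalg import lapack

a = array([[2., 1.], [1., 3.]])
# get_lapack_funcs picks the flavour matching the array's dtype (dgetrf here)
getrf, = lapack.get_lapack_funcs(('getrf',), (a,))
lu, piv, info = getrf(a)        # LU factorization; info == 0 means success
print info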
From agree74 at hotmail.com Sat Jun 17 17:50:18 2006 From: agree74 at hotmail.com (A. Rachel Grinberg) Date: Sat, 17 Jun 2006 14:50:18 -0700 Subject: [SciPy-user] Python - Matlab least squares difference Message-ID: Robert, You are absolutely right. If the system is underdetermined there are infinitely many solutions. Still, Matlab's solution yields residual = 0, and scipy gives you a VERY good approximation, though there are infinitely many "true" solutions. I didn't get a chance to actually test out Matlab's accuracy. Meaning I don't really know whether the example I had was a coincidence, or if Matlab always gives you a solution with residual = 0 in case of an underdetermined system. If the latter is the case, then I would say that Matlab's algorithm is better. Rachel A. Rachel Grinberg wrote: >Robert, > >Thanks for your response. > >The solution to the least square problem minimizes the l2 norm of the >residuals. Just to be clear, my point was that when the least squares problem is underdetermined, there are an infinite number of vectors that minimize the L2-norm of the residual equally. It is conventional to regularize the problem by choosing the single vector that has the minimum L2-norm from that infinite set. Sometimes because it's the choice that makes the most sense, but mostly because the SVD gives it to you automatically. >In my example the residual of Matlab's solution is ||A*(0,0,1)'-b|| = ||0|| >= 0, whereas python's solution yields a number that is very close to zero. Only because 0.0 and 1.0 and 2.0 can be exactly described in floating point arithmetic (and presumably only because Matlab pulled its (unconventional) answer out of thin air magically rather than actually doing the floating point calculations that it should be doing). numpy's answer is as close to 0 as you can reasonably expect given floating-point arithmetic. Matlab's answer is not actually better than numpy's. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Jun 17 18:40:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 17 Jun 2006 17:40:33 -0500 Subject: [SciPy-user] Python - Matlab least squares difference In-Reply-To: References: Message-ID: <449484E1.3000302@gmail.com> A. Rachel Grinberg wrote: > Robert, > > You are absolutely right. If the system is underdetermined there are > infinitely many solutions. Still, Matlab's solution yields residual = 0, > and scipy gives you a VERY good approximation, though there are > infinitely many "true" solutions. I didn't get a chance to actually > test out Matlab's accuracy. Meaning I don't really know whether the example > I had was a coincidence, or if Matlab always gives you a solution with > residual = 0 in case of an underdetermined system. If the latter is the > case, then I would say that Matlab's algorithm is better. Nonsense. The result you got was an accident of floating point arithmetic in that all of the values there were integers and thus exactly representable in floating point. Matlab's algorithm will not give you exact zeros for every problem. And even if it did it still wouldn't be better. If you were to do the problem in exact arithmetic, you will find a particular line in RR^n along which each point is a solution to the LLS problem.
For each point along that line, there is a point in FF^n (the set of n-tuples of floating point numbers, a strict subset of RR^n) that is closest to the point in RR^n. Any of those FF^n points are perfectly valid answers when using floating point arithmetic to solve the problem. If you take into account the accumulation of floating point error, the set of acceptable points in FF^n is a bit wider, forming a rough hypercylinder around the exact line. However, there is the regularization convention. There is precisely one vector in RR^n that is the correct answer under the "minimum L2-norm of x" convention. The set of acceptable floating point answers is in a small hypersphere around that point. Answers outside that hypersphere are not following the convention. Just because the floating point error happens to be accidentally less in some other region of the line does not mean that other region is a better answer. It is not. It is an accident of floating point, not a consequence of the actual problem you are trying to solve. The latter must always take precedence over the former. I recommend reading the first volume of _Numerical Computation_ by Christoph W. Ueberhuber for a thorough discussion of the semantic issues behind numerical modelling with floating point. http://www.amazon.com/gp/product/3540620583/ref=pd_bxgy_img_b/002-3187502-6538449?%5Fencoding=UTF8 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jeremy at jeremysanders.net Sun Jun 18 07:45:37 2006 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Sun, 18 Jun 2006 12:45:37 +0100 (BST) Subject: [SciPy-user] [ANN] Veusz 0.10 - a scientific plotting package Message-ID: Veusz 0.10 is now available. Veusz is a scientific plotting package written in Python using PyQt designed to produce publication-ready postscript graphs. It allows plots to be easily constructed, and provides a simple graphical user interface, command line interface, and can be embedded within other Python programs or called externally. It also allows data to be imported using numerous formats and edited. It can make a variety of different graphs including XY, histograms, images and contour plots. Changes in this version include several user interface enhancements, and CSV files can be imported, plus lots more. Download the latest version from http://download.gna.org/veusz/ or see the homepage http://home.gna.org/veusz/ for more details. 
Jeremy -- Jeremy Sanders http://www.jeremysanders.net/ Cambridge, UK Public Key Server PGP Key ID: E1AAE053 From nwagner at iam.uni-stuttgart.de Mon Jun 19 04:04:31 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 19 Jun 2006 10:04:31 +0200 Subject: [SciPy-user] FAIL: check_simple (scipy.optimize.tests.test_cobyla.test_cobyla) Message-ID: <44965A8F.1030205@iam.uni-stuttgart.de> Hi all, Can someone confirm this failure ====================================================================== FAIL: check_simple (scipy.optimize.tests.test_cobyla.test_cobyla) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_cobyla.py", line 18, in check_simple decimal=5) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 152, in assert_almost_equal return assert_array_almost_equal(actual, desired, decimal, err_msg) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 222, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 207, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.957975 , 0.64690335]) y: array([ 4.95535625, 0.66666667]) >>> numpy.__version__ '0.9.9.2631' >>> scipy.__version__ '0.5.0.1980' From scipy at mspacek.mm.st Mon Jun 19 05:02:57 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Mon, 19 Jun 2006 02:02:57 -0700 Subject: [SciPy-user] histogram(a, normed=True) doesn't normalize? Message-ID: <44966841.3000005@mspacek.mm.st> I've searched around on this and I can't find anything. I'm confused by the 'normed' argument in histogram(). According to the numpy book: "If normed is True, then the histogram will be normalized and comparable with a probability density function, otherwise it will be a count of the number of items in each bin." The sum of the heights of all the bins should be 1 for a PDF (right?). But I get the following in numpy 0.9.8: >>> import numpy as np >>> np.histogram([1,2,3], bins=3, normed=False) (array([1, 1, 1]), array([ 1., 1.66666667, 2.33333333])) >>> np.histogram([1,2,3], bins=3, normed=True) (array([ 0.5, 0.5, 0.5]), array([ 1., 1.66666667, 2.33333333])) Adding up the bins gives 1.5 in this case. Here's the code: C:\bin\Python24\Lib\site-packages\numpy\lib\function_base.py: def histogram(a, bins=10, range=None, normed=False): a = asarray(a).ravel() if not iterable(bins): if range is None: range = (a.min(), a.max()) mn, mx = [mi+0.0 for mi in range] if mn == mx: mn -= 0.5 mx += 0.5 bins = linspace(mn, mx, bins, endpoint=False) n = sort(a).searchsorted(bins) n = concatenate([n, [len(a)]]) n = n[1:]-n[:-1] if normed: db = bins[1] - bins[0] return 1.0/(a.size*db) * n, bins else: return n, bins From what I can tell, normed normalizes n by the total span of the bins, which seems an odd thing to do. Here's my interpretation of what it should do: if normed: return 1.0/sum(n) * n, bins else: return n, bins which then gives me: >>> np.histogram([1,2,3], bins=3, normed=True) (array([ 0.33333333, 0.33333333, 0.33333333]), array([ 1., 1.66666667, 2.33333333])) Which adds to 1. Am I way off on this? 
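(A quick check of the two conventions on the example above, written as a sketch; the point, anticipating the reply further below, is that numpy's normed histogram integrates to 1 over the bins, i.e. sum(n) * db == 1, rather than summing to 1.)

import numpy as np

n, bins = np.histogram([1, 2, 3], bins=3, normed=True)
db = bins[1] - bins[0]          # uniform bin width, 2/3 here
print np.sum(n)                 # 1.5  -- the bare heights don't sum to 1
print np.sum(n) * db            # 1.0  -- but height times width does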
Cheers,

Martin

From reidar.hagen at gmail.com Mon Jun 19 06:21:46 2006 From: reidar.hagen at gmail.com (Reidar Strand Hagen) Date: Mon, 19 Jun 2006 12:21:46 +0200 Subject: [SciPy-user] Win32 scipy binary; Python 2.4, Numeric Message-ID:

Hi

Due to wanting to keep versions as similar on Linux and Windows as possible (and py2exe problems), I'm currently looking for windows binaries of Scipy 0.3.X for python 2.4 (the ones depending on Numeric).

Does anyone have these, or if it's actually possible, later versions with dependencies on Numeric and python 2.4, available and thus saving me the trouble of trying and subsequently failing to compile them myself?

regards
Reidar Strand Hagen

From a.u.r.e.l.i.a.n at gmx.net Mon Jun 19 09:24:12 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Mon, 19 Jun 2006 15:24:12 +0200 Subject: [SciPy-user] Win32 scipy binary; Python 2.4, Numeric In-Reply-To: References: Message-ID: <200606191524.12555.a.u.r.e.l.i.a.n@gmx.net>

On Monday 19 June 2006 12:21, Reidar Strand Hagen wrote:
> Hi
>
> Due to wanting to keep versions as similar on Linux and Windows as
> possible (and py2exe problems), I'm currently looking for windows
> binaries of Scipy 0.3.X for python 2.4 (the ones depending on
> Numeric).
>
> Does anyone have these, or if it's actually possible, later versions
> with dependencies on Numeric and python 2.4, available and thus
> saving me the trouble of trying and subsequently failing to compile
> them myself?

As far as I know there never was a SciPy/Numeric binary for Python 2.4. Regarding compiling, have a look at http://scipy.org/Installing_SciPy/Windows.

Johannes

From reidar.hagen at gmail.com Mon Jun 19 09:36:37 2006 From: reidar.hagen at gmail.com (Reidar Strand Hagen) Date: Mon, 19 Jun 2006 15:36:37 +0200 Subject: [SciPy-user] Win32 scipy binary; Python 2.4, Numeric In-Reply-To: <200606191524.12555.a.u.r.e.l.i.a.n@gmx.net> References: <200606191524.12555.a.u.r.e.l.i.a.n@gmx.net> Message-ID:

On 19/06/06, Johannes Loehnert wrote:
> On Monday 19 June 2006 12:21, Reidar Strand Hagen wrote:
> > Hi
> >
> > Due to wanting to keep versions as similar on Linux and Windows as
> > possible (and py2exe problems), I'm currently looking for windows
> > binaries of Scipy 0.3.X for python 2.4 (the ones depending on
> > Numeric).
> >
> > Does anyone have these, or if it's actually possible, later versions
> > with dependencies on Numeric and python 2.4, available and thus
> > saving me the trouble of trying and subsequently failing to compile
> > them myself?
>
> As far as I know there never was a SciPy/Numeric binary for Python 2.4.
> Regarding compiling, have a look at
> http://scipy.org/Installing_SciPy/Windows.
>
> Johannes

Thanks. As far as I can tell, those instructions are not applicable for scipy 0.3.X, and I was under the impression that it was impossible to use 0.4.X with Numeric instead of numpy?

I've browsed the old.scipy.org site, and tried to build scipy 0.3.X, but I'm failing to make it recognize the needed libraries (such as blas). I've tried setting the paths both through environment variables and the site.cfg file, but no luck on first try at least.

Reidar Strand Hagen

From david.huard at gmail.com Mon Jun 19 09:42:28 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 19 Jun 2006 09:42:28 -0400 Subject: [SciPy-user] histogram(a, normed=True) doesn't normalize?
In-Reply-To: <44966841.3000005@mspacek.mm.st> References: <44966841.3000005@mspacek.mm.st> Message-ID: <91cf711d0606190642r7e5cac28w1842d2b472666f45@mail.gmail.com> Yes. The normalization constant is not the sum of the bin, but the sum of the product of the bin count with the bin width : \sum_i f_i * delta_i David -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdavis at staffmail.ed.ac.uk Mon Jun 19 09:51:36 2006 From: cdavis at staffmail.ed.ac.uk (Cory Davis) Date: Mon, 19 Jun 2006 14:51:36 +0100 Subject: [SciPy-user] rescuing old pickled numpy arrays Message-ID: <4496ABE8.3040104@staffmail.ed.ac.uk> Hi All, this relates to a question I raised a few weeks ago about pickled arrays from older Numpy versions not loading. Here is what happens when I try to load my old data ... ---> 51 return cPickle.load(infile) AttributeError: 'module' object has no attribute 'dtypedescr' Can anyone suggest an easy way to rescue this data? Cheers, Cory. From david.huard at gmail.com Mon Jun 19 10:00:08 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 19 Jun 2006 10:00:08 -0400 Subject: [SciPy-user] Cookbook: Interpolation of an N-D curve error In-Reply-To: <44937D54.5010108@auckland.ac.nz> References: <44937D54.5010108@auckland.ac.nz> Message-ID: <91cf711d0606190700gff610d2o361b5a4d6e902956@mail.gmail.com> Hi Angus, I updated scipy and numpy this morning, and the example ran fine, except for the import Float64 statement that has to be changed to float64. You may want to turn pdb on in ipython and find out what variable is triggering the exception. I'd like to help more but I have no idea what is going on in your case. Did you modify the example or ran it as is? David 2006/6/16, Angus McMorland : > > Hi all, > > I'm getting a TypeError when trying the N-D curve cookbook example > (http://www.scipy.org/Cookbook/Interpolation) with numpy 0.9.9.2630 and > scipy 0.5.0.1979. > > The error is: > > In [158]: tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) > > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > [snip] > > /usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in > splprep(x, w, u, ub, ue, k, task, s, t, full_output, nest, per, quiet) > 215 iwrk=_parcur_cache['iwrk'] > 216 > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > --> 217 nest,wrk,iwrk,per) > 218 _parcur_cache['u']=o['u'] > 219 _parcur_cache['ub']=o['ub'] > > TypeError: array cannot be safely cast to required type > > I'm guessing either (a) something is wrong in the code, or more likely, > (b) something's been deliberately changed and the example needs > updating, or (a distinct possibility) (c) I'm doing something wrong. > > Cheers, > > Angus. > -- > Angus McMorland > email a.mcmorland at auckland.ac.nz > mobile +64-21-155-4906 > > PhD Student, Neurophysiology / Multiphoton & Confocal Imaging > Physiology, University of Auckland > phone +64-9-3737-599 x89707 > > Armourer, Auckland University Fencing > Secretary, Fencing North Inc. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jf.moulin at gmail.com Mon Jun 19 10:54:56 2006 From: jf.moulin at gmail.com (Jean-Francois Moulin) Date: Mon, 19 Jun 2006 16:54:56 +0200 Subject: [SciPy-user] FFT2d Message-ID: Hi! 
I need to perform some 2d fft analysis and I got some problem... If I understood well fft2d is not yet implemented in numpy... I installed Numeric and do: import FFT f=fft2d(foo) and got the message "fft2d is not defined" Any clue? Thanks a lot for your help, JF From robert.kern at gmail.com Mon Jun 19 11:01:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 Jun 2006 10:01:56 -0500 Subject: [SciPy-user] FFT2d In-Reply-To: References: Message-ID: <4496BC64.80705@gmail.com> Jean-Francois Moulin wrote: > Hi! > > I need to perform some 2d fft analysis and I got some problem... > If I understood well fft2d is not yet implemented in numpy... I > installed Numeric and do: > > import FFT > f=fft2d(foo) > and got the message "fft2d is not defined" That should be FFT.fft2d(foo) (supposing you also actually had foo defined; please copy-and-paste small examples with the exact error messages they produce rather than retyping). However, numpy has 2D FFTs: In [1]: from numpy import dft In [2]: dft.fft2? Type: function Base Class: String Form: Namespace: Interactive File: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy-0.9.9.2631-py2.4-macosx-10.4-ppc.egg/numpy/dft/fftpack.py Definition: dft.fft2(a, s=None, axes=(-2, -1)) Docstring: fft2(a, s=None, axes=(-2,-1)) The 2d fft of a. This is really just fftnd with different default behavior. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Mon Jun 19 11:02:17 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 19 Jun 2006 17:02:17 +0200 Subject: [SciPy-user] FFT2d In-Reply-To: References: Message-ID: <4496BC79.2040709@iam.uni-stuttgart.de> Jean-Francois Moulin wrote: > Hi! > > I need to perform some 2d fft analysis and I got some problem... > If I understood well fft2d is not yet implemented in numpy... I > installed Numeric and do: > > import FFT > f=fft2d(foo) > and got the message "fft2d is not defined" > Any clue? > > Thanks a lot for your help, > > JF > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user help (numpy.dft.fft2) help (scipy.signal.fft2) Nils From jf.moulin at gmail.com Mon Jun 19 11:19:21 2006 From: jf.moulin at gmail.com (JF Moulin) Date: Mon, 19 Jun 2006 15:19:21 +0000 (UTC) Subject: [SciPy-user] FFT2d References: Message-ID: Thank you! you solved my problem! Sorry for not having foud it in the doc... it is a bit of a maze for a newbie... ;0) JF From nwagner at iam.uni-stuttgart.de Mon Jun 19 11:52:04 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 19 Jun 2006 17:52:04 +0200 Subject: [SciPy-user] Extract local minima Message-ID: <4496C824.5090900@iam.uni-stuttgart.de> Hi all, Is there a way to extract local minima from the column vector stored in data.mtx ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 37743 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: data.mtx URL:

From robert.kern at gmail.com Mon Jun 19 12:07:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 Jun 2006 11:07:17 -0500 Subject: [SciPy-user] Extract local minima In-Reply-To: <4496C824.5090900@iam.uni-stuttgart.de> References: <4496C824.5090900@iam.uni-stuttgart.de> Message-ID: <4496CBB5.3020206@gmail.com>

Nils Wagner wrote:
> Hi all,
>
> Is there a way to extract local minima from the column vector stored in
> data.mtx ?

The general scheme would be to look for elements x[i] such that (x[i-1] > x[i]) & (x[i+1] > x[i]).

(untested)

import numpy as np

x = ...

xp = x[2:]
xm = x[:-2]
x0 = x[1:-1]
minima = np.where((xp > x0) & (xm > x0))[0] + 1

There are some additional things you will want to do to make it robust, like checking the endpoints and looking at places where there are equal values. These are left as an exercise for the reader.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From strawman at astraw.com Mon Jun 19 12:32:44 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 19 Jun 2006 09:32:44 -0700 Subject: [SciPy-user] updated Ubuntu Dapper packages for numpy, matplotlib, and scipy online Message-ID: <4496D1AC.8030100@astraw.com>

I have updated the apt repository I maintain for Ubuntu's Dapper, which now includes:

numpy
matplotlib
scipy

Each package is from a recent SVN checkout and should thus be regarded as "bleeding edge". The repository has a new URL: http://debs.astraw.com/dapper/ I intend to keep this repository online for an extended duration.

If you want to put this repository in your sources list, you need to add the following lines to /etc/apt/sources.list::

deb http://debs.astraw.com/ dapper/
deb-src http://debs.astraw.com/ dapper/

I have not yet investigated the use of ATLAS in building or using the numpy binaries, and if performance is critical for you, please evaluate speed before using it. I intend to visit this issue, but I cannot say when.

The Debian source packages were generated using stdeb, [ http://stdeb.python-hosting.com/ ] a Python to Debian source package conversion utility I wrote. stdeb does not build packages that follow the Debian Python Policy, so the packages here may be slightly unusual compared to Python packages in the official Debian or Ubuntu repositories. For example, example scripts do not get installed, and no documentation is installed. Future releases of stdeb may resolve these issues.

As always, feedback is very appreciated.

Cheers!
Andrew

From nwagner at iam.uni-stuttgart.de Mon Jun 19 14:33:50 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 19 Jun 2006 20:33:50 +0200 Subject: [SciPy-user] Extract local minima In-Reply-To: <4496CBB5.3020206@gmail.com> References: <4496C824.5090900@iam.uni-stuttgart.de> <4496CBB5.3020206@gmail.com> Message-ID:

On Mon, 19 Jun 2006 11:07:17 -0500 Robert Kern wrote:
> Nils Wagner wrote:
>> Hi all,
>>
>> Is there a way to extract local minima from the column vector stored in
>> data.mtx ?
>
> The general scheme would be to look for elements x[i] such that
> (x[i-1] > x[i]) & (x[i+1] > x[i]).
>
> (untested)
>
> import numpy as np
>
> x = ...
>
> xp = x[2:]
> xm = x[:-2]
> x0 = x[1:-1]
> minima = np.where((xp > x0) & (xm > x0))[0] + 1
>
> There are some additional things you will want to do to make it robust,
> like checking the endpoints and looking at places where there are equal
> values. These are left as an exercise for the reader.
>
> -- Robert Kern
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user

Hi Robert,

Thank you very much! The result is promising.

from scipy import *
from pylab import plot, show, scatter

x = rand(100)
xp = x[2:]
xm = x[:-2]
x0 = x[1:-1]
minima = where((xp > x0) & (xm > x0))[0] + 1
plot(arange(0, len(x)), x, 'r-')
scatter(minima, x[minima], s=4)
show()

Nils

From cookedm at physics.mcmaster.ca Mon Jun 19 16:03:34 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 19 Jun 2006 16:03:34 -0400 Subject: [SciPy-user] rescuing old pickled numpy arrays In-Reply-To: <4496ABE8.3040104@staffmail.ed.ac.uk> References: <4496ABE8.3040104@staffmail.ed.ac.uk> Message-ID: <20060619200334.GA19061@arbutus.physics.mcmaster.ca>

On Mon, Jun 19, 2006 at 02:51:36PM +0100, Cory Davis wrote:
> Hi All,
> this relates to a question I raised a few weeks ago about pickled arrays
> from older Numpy versions not loading. Here is what happens when I try
> to load my old data ...
>
> ---> 51 return cPickle.load(infile)
>
> AttributeError: 'module' object has no attribute 'dtypedescr'
>
> Can anyone suggest an easy way to rescue this data?

The easy way I suppose would be to inject a 'dtypedescr' into the appropriate module.

Have a look at the pickletools module. pickletools.dis() will disassemble the pickle, so you could figure out what it's looking for.

At worst, you could use pickletools.genops() to pull out the data.

Can you send me a copy of a pickle? I may be able to cook up an upgrade routine.

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
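To illustrate the pickletools suggestion above -- a minimal editorial sketch, not from the original thread; the filename is hypothetical:

import pickletools

# Disassemble the pickle stream; GLOBAL opcodes such as
#     GLOBAL     'numpy dtypedescr'
# name the module attribute the unpickler will try to import.
data = open('old_arrays.pickle', 'rb').read()
pickletools.dis(data)

Scanning the disassembly for GLOBAL lines shows exactly which missing name (here numpy.dtypedescr) has to be provided before cPickle.load() can succeed.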
res = dat - convolve(dat, kern, mode='same') Now search for local minima less than -0.5. See attached figure -- Paul -------------- next part -------------- A non-text attachment was scrubbed... Name: simple_conv.png Type: image/png Size: 37234 bytes Desc: not available URL: From a.mcmorland at auckland.ac.nz Mon Jun 19 19:15:40 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Tue, 20 Jun 2006 11:15:40 +1200 Subject: [SciPy-user] Cookbook: Interpolation of an N-D curve error In-Reply-To: <91cf711d0606190700gff610d2o361b5a4d6e902956@mail.gmail.com> References: <44937D54.5010108@auckland.ac.nz> <91cf711d0606190700gff610d2o361b5a4d6e902956@mail.gmail.com> Message-ID: <4497301C.8030506@auckland.ac.nz> Hi David et al, Thanks for your interest. David Huard wrote: > Hi Angus, > > I updated scipy and numpy this morning, and the example ran fine, > except for the import Float64 statement that has to be changed to > float64. Yep. I got that one okay - in fact neither declaration seems to be required for the example to work now on my laptop. However, some problem still remains - see below. > You may want to turn pdb on in ipython and find out what > variable is triggering the exception. I'd like to help more but I > have no idea what is going on in your case. Did you modify the > example or ran it as is? > > David > > 2006/6/16, Angus McMorland >: > > I'm getting a TypeError when trying the N-D curve cookbook example ( > http://www.scipy.org/Cookbook/Interpolation) with numpy 0.9.9.2630 > and scipy 0.5.0.1979. > > The error is: > > In [158]: tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) > --------------------------------------------------------------------------- > > exceptions.TypeError Traceback (most > recent call last) > > [snip] > > /usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in > splprep(x, w, u, ub, ue, k, task, s, t, full_output, nest, per, > quiet) 215 iwrk=_parcur_cache['iwrk'] 216 > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > --> 217 nest,wrk,iwrk,per) 218 > _parcur_cache['u']=o['u'] 219 _parcur_cache['ub']=o['ub'] > > TypeError: array cannot be safely cast to required type After a bit more poking around, I've found that the routine runs fine on my i686 debian laptop, but not on my amd64 desktop machine, both running identical svn versions of numpy (0.9.9.2631) and scipy (0.5.0.1980) in python2.3 (and python2.4 also works on the i686 machine). I attach the script I ran - identical to the example in the cookbook except for the removal of the Float64. ipdb halts on line 22, then line 217 of fitpack.py in splprep. Using ipdb to look at the variables shows me that w, u and s are somehow ill-defined, but I don't know exactly what's going on. Here's the ipdb session print-out: ipdb> whatis w ipdb> w.shape /home/amcmorl/tmp/spl.py 20 21 # find the knot points ---> 22 tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) 23 24 # evaluate spline, including interpolated points /usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in splprep() 215 iwrk=_parcur_cache['iwrk'] 216 t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, --> 217 nest,wrk,iwrk,per) 218 _parcur_cache['u']=o['u'] 219 _parcur_cache['ub']=o['ub'] ipdb> whatis u ipdb> u.shape /home/amcmorl/tmp/spl.py 21 # find the knot points ---> 22 tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) 23 ipdb> whatis s ipdb> s WARNING: Failure executing file: In [23]: Requesting s kicks me out of the debugger altogether. 
Other variables seem okay:

x: numpy.ndarray, shape = 3,100, dtype.type =
ub: int, 0
ue: int, 1
k: int, 2
task: int, 0
ipar: bool, False
t: numpy.ndarray, array([], dtype=float64)
nest: int, 103
wrk: numpy.ndarray, array([], dtype=float64)
iwrk: numpy.ndarray, array([], dtype=int64)
per: int, 0

I hope that helps work out what's going on...

Angus. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc.

-------------- next part -------------- A non-text attachment was scrubbed... Name: spl.py Type: text/x-python Size: 1059 bytes Desc: not available URL:

From val at vtek.com Mon Jun 19 19:30:45 2006 From: val at vtek.com (val) Date: Mon, 19 Jun 2006 19:30:45 -0400 Subject: [SciPy-user] Extract local minima In-Reply-To: <4496C824.5090900@iam.uni-stuttgart.de> References: <4496C824.5090900@iam.uni-stuttgart.de> Message-ID: <449733A5.9090201@vtek.com>

try google peak finder
val

Nils Wagner wrote:
> Hi all,
>
> Is there a way to extract local minima from the column vector stored in
> data.mtx ?
>
> Nils
>
> ------------------------------------------------------------------------
>
> %%MatrixMarket matrix array real general
> %
> 1 300
> [the 300 values of the quoted data.mtx attachment are omitted here]
From fperez.net at gmail.com Mon Jun 19 20:20:03 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 19 Jun 2006 18:20:03 -0600 Subject: [SciPy-user] Cookbook: Interpolation of an N-D curve error In-Reply-To: <4497301C.8030506@auckland.ac.nz> References: <44937D54.5010108@auckland.ac.nz> <91cf711d0606190700gff610d2o361b5a4d6e902956@mail.gmail.com> <4497301C.8030506@auckland.ac.nz> Message-ID:

> ipdb> whatis s
>
> ipdb> s
> WARNING: Failure executing file:
>
> In [23]:
>
> Requesting s kicks me out of the debugger altogether.

Not an answer to your question, but 's' is a pdb command:

ipdb> help s
s(tep) Execute the current line, stop at the first possible occasion (either in a function that is called or in the current function).

Use print s or the shorthand p s instead, so you actually print the local variable 's' instead of executing the command 's'.
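For instance, a hypothetical session following the values Angus posted (an editorial sketch, not from the original mail):

ipdb> p s          # prints the local variable s
3.0
ipdb> print s      # equivalent
3.0
ipdb> s            # bare 's' is the step command, so it runs code instead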
Cheers, f From a.mcmorland at auckland.ac.nz Mon Jun 19 20:39:10 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Tue, 20 Jun 2006 12:39:10 +1200 Subject: [SciPy-user] Cookbook: Interpolation of an N-D curve error In-Reply-To: References: <44937D54.5010108@auckland.ac.nz> <91cf711d0606190700gff610d2o361b5a4d6e902956@mail.gmail.com> <4497301C.8030506@auckland.ac.nz> Message-ID: <449743AE.5010501@auckland.ac.nz> Fernando Perez wrote: >> ipdb> whatis s >> >> ipdb> s >> WARNING: Failure executing file: >> >> In [23]: >> >> Requesting s kicks me out of the debugger altogether. > > Not an answer to your question, but 's' is a pdb command: Doh, of course! Thanks for that. Following on with the actual problem: ipdb> print s 3.0 So no problems there in fact. Just while I was there I tried: ipdb> print u [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] ipdb> u.shape /home/amcmorl/tmp/spl.py 21 # find the knot points ---> 22 tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) 23 ipdb> print w [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] ipdb> w.shape /home/amcmorl/tmp/spl.py 20 21 # find the knot points ---> 22 tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) 23 24 # evaluate spline, including interpolated points /usr/lib/python2.3/site-packages/scipy/interpolate/fitpack.py in splprep() 215 iwrk=_parcur_cache['iwrk'] 216 t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, --> 217 nest,wrk,iwrk,per) 218 _parcur_cache['u']=o['u'] 219 _parcur_cache['ub']=o['ub'] I.E. I can see the values, but not get the shape returned. My understanding is that u should be the incrementing parameter for the spline (and checking on the i686 (where the routine works): In [11]:u Out[11]: array([ 0. , 0.00660902, 0.0123891 , 0.02341715, 0.03388626, 0.05189959, 0.068629 , 0.08002793, 0.08344803, 0.09142408, 0.09939328, 0.10562672, 0.10838236, 0.11531264, 0.12881276, 0.13391073, 0.14557512, 0.1514497 , 0.16502928, 0.17323899, 0.18147124, 0.19263442, 0.20709241, 0.20862571, 0.21886334, 0.22479777, 0.23564814, 0.2451128 , 0.25976362, 0.26892058, 0.27990341, 0.28739167, 0.29471638, 0.3012055 , 0.31482816, 0.32418042, 0.34062476, 0.34403043, 0.35805071, 0.36502075, 0.37829313, 0.38401472, 0.3915394 , 0.39294913, 0.4043306 , 0.4156923 , 0.42120011, 0.43522407, 0.44540497, 0.46210282, 0.47038053, 0.47715903, 0.49183298, 0.50998032, 0.51845474, 0.53809475, 0.5530424 , 0.56392821, 0.56784815, 0.57119358, 0.57531501, 0.58558322, 0.59262561, 0.60861044, 0.62329971, 0.63037712, 0.64413892, 0.65586238, 0.66734774, 0.68279714, 0.69122243, 0.70602072, 0.71882921, 0.72595286, 0.7387055 , 0.74737882, 0.75263852, 0.76107314, 0.7737371 , 0.78308686, 0.78913781, 0.80321246, 0.81822862, 0.8354007 , 0.84295247, 0.84967894, 0.86107013, 0.87619586, 0.88717895, 0.89178471, 0.89884305, 0.90861747, 0.91663593, 0.92117768, 0.93464362, 0.95465968, 0.96749238, 0.97699623, 0.9824898 , 1. ]) So, it looks like the definition of u might be the problem. 
I see that it looks like u is correctly defined as a float (from the "0."s), although ipdb> u.dtype raises the same error as u.shape.

A. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc.

From scipy at mspacek.mm.st Mon Jun 19 21:59:13 2006 From: scipy at mspacek.mm.st (Martin Spacek) Date: Mon, 19 Jun 2006 18:59:13 -0700 Subject: [SciPy-user] histogram(a, normed=True) doesn't normalize? In-Reply-To: <91cf711d0606190642r7e5cac28w1842d2b472666f45@mail.gmail.com> References: <44966841.3000005@mspacek.mm.st> <91cf711d0606190642r7e5cac28w1842d2b472666f45@mail.gmail.com> Message-ID: <44975671.2010202@mspacek.mm.st>

David Huard wrote:
> Yes.
>
> The normalization constant is not the sum of the bin, but the sum of the
> product of the bin count with the bin width :
> \sum_i f_i * delta_i

It seems I long ago forgot the distinction between a probability *distribution* function and a probability *density* function. I assumed they were the same thing. Thanks for the clarification.

Martin

From robert.kern at gmail.com Mon Jun 19 22:46:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 Jun 2006 21:46:46 -0500 Subject: [SciPy-user] histogram(a, normed=True) doesn't normalize? In-Reply-To: <44975671.2010202@mspacek.mm.st> References: <44966841.3000005@mspacek.mm.st> <91cf711d0606190642r7e5cac28w1842d2b472666f45@mail.gmail.com> <44975671.2010202@mspacek.mm.st> Message-ID: <44976196.9060409@gmail.com>

Martin Spacek wrote:
> David Huard wrote:
>
>> Yes.
>>
>> The normalization constant is not the sum of the bin, but the sum of the
>> product of the bin count with the bin width :
>> \sum_i f_i * delta_i
>
> It seems I long ago forgot the distinction between a probability
> *distribution* function and a probability *density* function. I assumed
> they were the same thing. Thanks for the clarification.

No, they're the same (more or less). However, what you were asking about (bins sum to 1) is a probability *mass* function.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From cdavis at staffmail.ed.ac.uk Tue Jun 20 06:56:39 2006 From: cdavis at staffmail.ed.ac.uk (Cory Davis) Date: Tue, 20 Jun 2006 11:56:39 +0100 Subject: [SciPy-user] rescuing old pickled numpy arrays In-Reply-To: <20060619200334.GA19061@arbutus.physics.mcmaster.ca> References: <4496ABE8.3040104@staffmail.ed.ac.uk> <20060619200334.GA19061@arbutus.physics.mcmaster.ca> Message-ID: <4497D467.4070907@staffmail.ed.ac.uk>

Thanks for the advice David, here is an example pickle that produces the error below. I think it's just a dictionary with some arrays.

Cheers,
Cory.

David M. Cooke wrote:
> On Mon, Jun 19, 2006 at 02:51:36PM +0100, Cory Davis wrote:
>
>> Hi All,
>> this relates to a question I raised a few weeks ago about pickled arrays
>> from older Numpy versions not loading. Here is what happens when I try
>> to load my old data ...
>>
>> ---> 51 return cPickle.load(infile)
>>
>> AttributeError: 'module' object has no attribute 'dtypedescr'
>>
>> Can anyone suggest an easy way to rescue this data?
>
> The easy way I suppose would be to inject a 'dtypedescr' into the
> appropiate module.
>
> Have a look at the pickletools module.
pickletools.dis() will > disassemble the pickle, so you could figure out what it's looking for. > > At worst, you could use pickletools.genops() to pull out the data. > > Can you send me a copy of a pickle? I may be able to cook up an upgrade > routine. > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: results_taubar_ar3.pickle URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cdavis.vcf Type: text/x-vcard Size: 431 bytes Desc: not available URL: From grante at visi.com Tue Jun 20 14:54:47 2006 From: grante at visi.com (Grant Edwards) Date: Tue, 20 Jun 2006 18:54:47 +0000 (UTC) Subject: [SciPy-user] element-by-element max of two arrays? Message-ID: There's got to be a simple way to do this, but I can't find it. How do I get an element-by-element max or min of two arrays? For two NxM arrays A and B, I want an NxM array C where C[i,j] == max(A[i,j],B[i,j]) I had thought maybe max(A,B) would work, but it doesn't. -- Grant Edwards grante Yow! MY income is ALL at disposable! visi.com From oliphant.travis at ieee.org Tue Jun 20 14:58:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 20 Jun 2006 12:58:54 -0600 Subject: [SciPy-user] element-by-element max of two arrays? In-Reply-To: References: Message-ID: <4498456E.7090309@ieee.org> Grant Edwards wrote: > There's got to be a simple way to do this, but I can't find it. > > How do I get an element-by-element max or min of two arrays? > > For two NxM arrays A and B, I want an NxM array C where > > C[i,j] == max(A[i,j],B[i,j]) > > I had thought maybe max(A,B) would work, but it doesn't. > > C = numpy.maximum(A,B) From aisaac at american.edu Tue Jun 20 15:13:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 20 Jun 2006 15:13:27 -0400 Subject: [SciPy-user] element-by-element max of two arrays? In-Reply-To: References: Message-ID: On Tue, 20 Jun 2006, (UTC) Grant Edwards apparently wrote: > How do I get an element-by-element max or min of two > arrays? For two NxM arrays A and B, I want an NxM array > C where > C[i,j] == max(A[i,j],B[i,j]) x = rand(5,10) y = rand(5,10) z = array([x,y]) zmax = z.max(axis=0) hth, Alan Isaac From grante at visi.com Tue Jun 20 15:05:20 2006 From: grante at visi.com (Grant Edwards) Date: Tue, 20 Jun 2006 19:05:20 +0000 (UTC) Subject: [SciPy-user] element-by-element max of two arrays? References: <4498456E.7090309@ieee.org> Message-ID: On 2006-06-20, Travis Oliphant wrote: > Grant Edwards wrote: >> There's got to be a simple way to do this, but I can't find it. >> >> How do I get an element-by-element max or min of two arrays? >> >> For two NxM arrays A and B, I want an NxM array C where >> >> C[i,j] == max(A[i,j],B[i,j]) >> >> I had thought maybe max(A,B) would work, but it doesn't. > C = numpy.maximum(A,B) Well, that's a bit obvious. ;) I had just figured out that array((A,B)).max(0) did what I wanted, but that's a little opaque. -- Grant Edwards grante Yow! .. I wonder if I at ought to tell them about my visi.com PREVIOUS LIFE as a COMPLETE STRANGER? From cookedm at physics.mcmaster.ca Tue Jun 20 15:51:29 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 20 Jun 2006 15:51:29 -0400 Subject: [SciPy-user] rescuing old pickled numpy arrays In-Reply-To: <4497D467.4070907@staffmail.ed.ac.uk> References: <4496ABE8.3040104@staffmail.ed.ac.uk> <20060619200334.GA19061@arbutus.physics.mcmaster.ca> <4497D467.4070907@staffmail.ed.ac.uk> Message-ID: <20060620155129.38f76f7b@arbutus.physics.mcmaster.ca> On Tue, 20 Jun 2006 11:56:39 +0100 Cory Davis wrote: > Thanks for the advice David, > here is an example pickle that produces the error below. I think its > just a dictionary with some arrays. > Cheers, > Cory. This seems to work: import pickle from cStringIO import StringIO class OldNumpyUnpickler(pickle.Unpickler): def find_class(self, module, name): if module == 'numpy' and name == 'dtypedescr': name = 'dtype' return pickle.Unpickler.find_class(self, module, name) def loads(s): f = StringIO(s) return OldNumpyUnpickler(f).load() Your pickle is new enough that basically dtypedescr == dtype for what the unpickler needs. > David M. Cooke wrote: > > On Mon, Jun 19, 2006 at 02:51:36PM +0100, Cory Davis wrote: > > > >>Hi All, > >>this relates to a question I raised a few weeks ago about pickled arrays > >>from older Numpy versions not loading. Here is what happens when I try > >>to load my old data ... > >> > >>---> 51 return cPickle.load(infile) > >> > >>AttributeError: 'module' object has no attribute 'dtypedescr' > >> > >>Can anyone suggest an easy way to rescue this data? > > > > > > The easy way I suppose would be to inject a 'dtypedescr' into the > > appropiate module. > > > > Have a look at the pickletools module. pickletools.dis() will > > disassemble the pickle, so you could figure out what it's looking for. > > > > At worst, you could use pickletools.genops() to pull out the data. > > > > Can you send me a copy of a pickle? I may be able to cook up an upgrade > > routine. > > -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From travis at enthought.com Wed Jun 21 11:20:38 2006 From: travis at enthought.com (Travis N. Vaught) Date: Wed, 21 Jun 2006 10:20:38 -0500 Subject: [SciPy-user] SciPy 2006 Tutorials Message-ID: <449963C6.3070203@enthought.com> All, As part of this year's SciPy 2006 Conference, we've planned Coding Sprints on Monday and Tuesday (August 14-15) and a Tutorial Day Wednesday (August 16)--the normal conference presentations follow on Thursday and Friday (August 17-18). For this year at least, the Tutorials (and Sprints) are no additional charge (you're on your own for food on those days, though). With regard to Tutorial topics, we've settled on the following: "3D visualization in Python using tvtk and MayaVi" "Scientific Data Analysis and Visualization using IPython and Matplotlib." "Building Scientific Applications using the Enthought Tool Suite (Envisage, Traits, Chaco, etc.)" "NumPy (migration from Numarray & Numeric, overview of NumPy)" These will be in two tracks with two three hour sessions in each track. If you plan to attend, please send an email to tutorials at scipy.org with the two sessions you'd most like to hear and we'll build the schedule with a minimum of conflict. We'll post the schedule of the tracks on the Wiki here: http://www.scipy.org/SciPy2006/TutorialSessions Also, if you haven't registered already, the deadline for early registration is July 14. The abstract submission deadline is July 7. 
More information is here: http://www.scipy.org/SciPy2006

Thanks,

Travis

From edmondo_minisci at yahoo.it Wed Jun 21 11:30:50 2006 From: edmondo_minisci at yahoo.it (Edmondo Minisci) Date: Wed, 21 Jun 2006 17:30:50 +0200 (CEST) Subject: [SciPy-user] Unable to find COW Message-ID: <20060621153050.77431.qmail@web26707.mail.ukl.yahoo.com>

Dear Sirs,

I just installed the latest(?) scipy release (0.4.9), but I'm not able to find the cow module.

Could you tell me how I can get it?

Best regards

Edmondo Minisci

Chiacchiera con i tuoi amici in tempo reale! http://it.yahoo.com/mail_it/foot/*http://it.messenger.yahoo.com

From robert.kern at gmail.com Wed Jun 21 12:26:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 11:26:05 -0500 Subject: [SciPy-user] Unable to find COW In-Reply-To: <20060621153050.77431.qmail@web26707.mail.ukl.yahoo.com> References: <20060621153050.77431.qmail@web26707.mail.ukl.yahoo.com> Message-ID: <4499731D.7020806@gmail.com>

Edmondo Minisci wrote:
> Dear Sirs,
>
> I just installed the latest(?) scipy release (0.4.9),
> but I'm not able to find the cow module.
>
> Could you tell me how I can get it?

It has been moved into the sandbox subpackage since no one has volunteered to bring it up to date and maintain it. The sandbox is not built by default, so it won't be in any packaged distribution that you may have installed. You will have to get it from SVN.

http://www.scipy.org/Download

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From pajer at iname.com Wed Jun 21 12:31:12 2006 From: pajer at iname.com (Gary) Date: Wed, 21 Jun 2006 12:31:12 -0400 Subject: [SciPy-user] can't import scipy.interpolate Message-ID: <44997450.3040209@iname.com>

>>> scipy.__core_version__
'0.8.4'
>>> import scipy.interpolate
Fatal Python error: can't initialize module fitpack

This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.

Have I done something wrong?
-gary

From gpajer at rider.edu Wed Jun 21 12:49:30 2006 From: gpajer at rider.edu (Gary Pajer) Date: Wed, 21 Jun 2006 12:49:30 -0400 Subject: [SciPy-user] can't import scipy.interpolate In-Reply-To: <44997450.3040209@iname.com> References: <44997450.3040209@iname.com> Message-ID: <4499789A.9050401@rider.edu>

Gary wrote:
> >>> scipy.__core_version__
> '0.8.4'
> >>> import scipy.interpolate
> Fatal Python error: can't initialize module fitpack
>
> This application has requested the Runtime to terminate it in an unusual
> way.
> Please contact the application's support team for more information.

WinXP from binary installer, btw

> Have I done something wrong?
> -gary
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user

From edmondo_minisci at yahoo.it Wed Jun 21 13:04:41 2006 From: edmondo_minisci at yahoo.it (Edmondo Minisci) Date: Wed, 21 Jun 2006 19:04:41 +0200 (CEST) Subject: [SciPy-user] Unable to find COW Message-ID: <20060621170441.54872.qmail@web26714.mail.ukl.yahoo.com>

Dear All,

I resolved the problem by downloading the cow files from "scipy.migratedtosvn/Lib".

It looks to be working and I am able to start the server in the way described in the first example.

After the start it requires the proc module and I downloaded it from "scipy_core.migratedtosvn/scipy_distutils".
I try to go further but I can only obtain what follows: Traceback (most recent call last): File "pr_cow.py", line 18, in ? cluster.info() File "/usr/lib64/python2.3/site-packages/scipy/cow/cow.py", line 470, in info results = self.info_list() File "/usr/lib64/python2.3/site-packages/scipy/cow/cow.py", line 539, in info_list res = self.apply(scipy_proc.machine_info,()) File "/usr/lib64/python2.3/site-packages/scipy/cow/cow.py", line 743, in apply return self._send_recv(package) File "/usr/lib64/python2.3/site-packages/scipy/cow/cow.py", line 347, in _send_recv self.handle_error() File "/usr/lib64/python2.3/site-packages/scipy/cow/cow.py", line 408, in handle_error raise ClusterError, msg ClusterError: The same error occured on all workers: exceptions.NameError: name 'kB' is not defined########### Traceback text from remote machine ########### Traceback (most recent call last): File "/usr/lib64/python2.3/site-packages/scipy/cow/sync_cluster.py", line 551, in handle send_msg = self.process(recv_msg) File "/usr/lib64/python2.3/site-packages/scipy/cow/sync_cluster.py", line 627, in process result= apply(command,(),task) File "/usr/lib64/python2.3/site-packages/scipy/cow/sync_cluster.py", line 651, in apply_func return apply(function,args ,keywords) File "/usr/lib64/python2.3/site-packages/scipy_distutils/proc.py", line 115, in machine_info all.update(mem_info()) File "/usr/lib64/python2.3/site-packages/scipy_distutils/proc.py", line 77, in mem_info used = eval(x[2]) File "", line 0, in ? NameError: name 'kB' is not defined ################# End remote traceback ################## Does anyone know what happens and how the problem can be solved? Best regards Edmondo Minisci Chiacchiera con i tuoi amici in tempo reale! http://it.yahoo.com/mail_it/foot/*http://it.messenger.yahoo.com From robert.kern at gmail.com Wed Jun 21 13:15:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 12:15:23 -0500 Subject: [SciPy-user] Unable to find COW In-Reply-To: <20060621170441.54872.qmail@web26714.mail.ukl.yahoo.com> References: <20060621170441.54872.qmail@web26714.mail.ukl.yahoo.com> Message-ID: <44997EAB.3030009@gmail.com> Edmondo Minisci wrote: > Dear All, > > I resolved the problem by downloading the cow files > from the files from "scipy.migratedtosvn/Lib". > > It loks working and I amble to strat the server in the > way described in the first example. > > After the start it requires the proc module and I > downloaded from > "scipy_core.migratedtosvn/scipy_distutils". > File > "/usr/lib64/python2.3/site-packages/scipy_distutils/proc.py", > line 77, in mem_info > used = eval(x[2]) > File "", line 0, in ? > NameError: name 'kB' is not defined I looks like you have an error in the code you are trying to execute. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jun 21 13:15:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 12:15:28 -0500 Subject: [SciPy-user] can't import scipy.interpolate In-Reply-To: <44997450.3040209@iname.com> References: <44997450.3040209@iname.com> Message-ID: <44997EB0.5080505@gmail.com> Gary wrote: > >>> scipy.__core_version__ > '0.8.4' > >>> import scipy.interpolate > Fatal Python error: can't initialize module fitpack > > This application has requested the Runtime to terminate it in an unusual > way. 
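An editorial aside on the traceback above: the failure is in the quoted proc.py, which was presumably written for the old 2.4-kernel /proc/meminfo layout. On newer kernels a split line looks like ['MemTotal:', '515232', 'kB'], so eval(x[2]) is handed the bare name kB, hence the NameError. Below is a minimal sketch of a parser that avoids eval; the function name and layout are illustrative, not the actual scipy_distutils code:

def mem_info(path='/proc/meminfo'):
    """Parse /proc/meminfo into a dict of integer kB values."""
    info = {}
    for line in open(path):
        parts = line.split()
        # e.g. ['MemTotal:', '515232', 'kB'] -- keep the number, skip the unit
        if len(parts) >= 2 and parts[0].endswith(':'):
            try:
                info[parts[0][:-1]] = int(parts[1])
            except ValueError:
                pass
    return info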
> Please contact the application's support team for more information. Hmm. That looks like an old build from when we were still calling numpy "scipy core". Probably there is a mismatch between the "scipy core" version (that's the version number you printed) and the scipy version (scipy.__version__). I recommend upgrading to the latest numpy and scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wegwerp at gmail.com Wed Jun 21 14:59:22 2006 From: wegwerp at gmail.com (weg werp) Date: Wed, 21 Jun 2006 20:59:22 +0200 Subject: [SciPy-user] update of constansts module Message-ID: <6f54c160606211159k57263adg16bf8321c838bef6@mail.gmail.com> Hi group, I made some small changes to my constants module: -Fixed some small typos + added a few new conversion constants -Added binary prefixes -Added a few constants that differ between the imperial and US system. I only included the ones that are still actively being used, as far as I know (= wikipedia). The default versions are always US, since the UK should now officially be a metric country .... Can someone upload this to SVN (sandbox/constants)? Cheers, BasSw -------------- next part -------------- A non-text attachment was scrubbed... Name: constants.py Type: text/x-python Size: 4715 bytes Desc: not available URL: From robert.kern at gmail.com Wed Jun 21 15:02:54 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 14:02:54 -0500 Subject: [SciPy-user] update of constansts module In-Reply-To: <6f54c160606211159k57263adg16bf8321c838bef6@mail.gmail.com> References: <6f54c160606211159k57263adg16bf8321c838bef6@mail.gmail.com> Message-ID: <449997DE.3040006@gmail.com> weg werp wrote: > Hi group, > > I made some small changes to my constants module: > -Fixed some small typos + added a few new conversion constants > -Added binary prefixes > -Added a few constants that differ between the imperial and US system. > I only included the ones that are still actively being used, as far as > I know (= wikipedia). The default versions are always US, since the UK > should now officially be a metric country .... > > Can someone upload this to SVN (sandbox/constants)? In the future, please add a ticket to the Trac and attach files there. If you want to discuss it on the list, you can reference the URL of the ticket. http://projects.scipy.org/scipy/scipy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pajer at iname.com Wed Jun 21 15:20:19 2006 From: pajer at iname.com (Gary) Date: Wed, 21 Jun 2006 15:20:19 -0400 Subject: [SciPy-user] can't import scipy.interpolate In-Reply-To: <44997EB0.5080505@gmail.com> References: <44997450.3040209@iname.com> <44997EB0.5080505@gmail.com> Message-ID: <44999BF3.8000207@iname.com> Robert Kern wrote: > Gary wrote: > >> >>> scipy.__core_version__ >> '0.8.4' >> >>> import scipy.interpolate >> Fatal Python error: can't initialize module fitpack >> >> This application has requested the Runtime to terminate it in an unusual >> way. >> Please contact the application's support team for more information. >> > > Hmm. That looks like an old build from when we were still calling numpy "scipy > core". 
Probably there is a mismatch between the "scipy core" version (that's the > version number you printed) and the scipy version (scipy.__version__). > I thought that was weird, too. I did some housecleaning over the weekend, and I must have screwed things up instead of cleaned things up. > I recommend upgrading to the latest numpy and scipy. > OK, so I got rid of all my build directories, and my numpy and scipy directories, and tried to rebuild from svn (for no particular reason, but I've done it easily in the past). Numpy went fine, but scipy did this: (I don't want to waste people's time on this ... if it's something peculiar to me, and hard to track down, I'll just go back to the official released binary installer.) C:\Documents and Settings\Gary\My Documents\python\scipy\svn_scipy>python setup. py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst fft_opt_info: fftw3_info: libraries fftw3 not find in c:\python24\lib libraries fftw3 not find in C:\ libraries fftw3 not find in c:\python24\libs fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not find in c:\python24\lib libraries rfftw,fftw not find in C:\ libraries rfftw,fftw not find in c:\python24\libs fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not find in c:\python24\lib libraries drfftw,dfftw not find in C:\ libraries drfftw,dfftw not find in c:\python24\libs dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not find in c:\python24\lib libraries mkl,vml,guide not find in C:\ libraries mkl,vml,guide not find in c:\python24\libs NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not find in c:\MinGW\lib\atlas libraries ptf77blas,ptcblas,atlas not find in c:\python24\lib libraries ptf77blas,ptcblas,atlas not find in C:\ libraries ptf77blas,ptcblas,atlas not find in c:\python24\libs NOT AVAILABLE atlas_blas_info: FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['c:\\MinGW\\lib\\atlas'] language = c No module named msvccompiler in numpy.distutils, trying from distutils.. Traceback (most recent call last): File "setup.py", line 50, in ? 
setup_package() File "setup.py", line 42, in setup_package configuration=configuration ) File "c:\python24\Lib\site-packages\numpy\distutils\core.py", line 140, in setup config = configuration() File "setup.py", line 14, in configuration config.add_subpackage('Lib') File "c:\python24\Lib\site-packages\numpy\distutils\misc_util.py", line 740, in add_subpackage caller_level = 2) File "c:\python24\Lib\site-packages\numpy\distutils\misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "c:\python24\Lib\site-packages\numpy\distutils\misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File ".\Lib\setup.py", line 7, in configuration config.add_subpackage('integrate') File "c:\python24\Lib\site-packages\numpy\distutils\misc_util.py", line 740, in add_subpackage caller_level = 2) File "c:\python24\Lib\site-packages\numpy\distutils\misc_util.py", line 723, in get_subpackage caller_level = caller_level + 1) File "c:\python24\Lib\site-packages\numpy\distutils\misc_util.py", line 670, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib\integrate\setup.py", line 11, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File "c:\python24\Lib\site-packages\numpy\distutils\system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "c:\python24\Lib\site-packages\numpy\distutils\system_info.py", line 397, in get_info self.calc_info() File "c:\python24\Lib\site-packages\numpy\distutils\system_info.py", line 1223, in calc_info atlas_version = get_atlas_version(**version_info) File "c:\python24\Lib\site-packages\numpy\distutils\system_info.py", line 1085, in get_atlas_version library_dirs=config.get('library_dirs', []), File "c:\python24\Lib\site-packages\numpy\distutils\command\config.py", line 101, in get_output self._check_compiler() File "c:\python24\Lib\site-packages\numpy\distutils\command\config.py", line 34, in _check_compiler old_config._check_compiler(self) File "c:\python24\lib\distutils\command\config.py", line 107, in _check_compiler dry_run=self.dry_run, force=1) File "c:\python24\Lib\site-packages\numpy\distutils\ccompiler.py", line 331, in new_compiler compiler = klass(None, dry_run, force) File "c:\python24\lib\distutils\msvccompiler.py", line 211, in __init__ self.__macros = MacroExpander(self.__version) File "c:\python24\lib\distutils\msvccompiler.py", line 112, in __init__ self.load_macros(version) File "c:\python24\lib\distutils\msvccompiler.py", line 133, in load_macros raise DistutilsPlatformError, \ distutils.errors.DistutilsPlatformError: The .NET Framework SDK needs to be installed before building extensions for Python. From robert.kern at gmail.com Wed Jun 21 15:51:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 14:51:24 -0500 Subject: [SciPy-user] can't import scipy.interpolate In-Reply-To: <44999BF3.8000207@iname.com> References: <44997450.3040209@iname.com> <44997EB0.5080505@gmail.com> <44999BF3.8000207@iname.com> Message-ID: <4499A33C.208@gmail.com> Gary wrote: > OK, so I got rid of all my build directories, and my numpy and scipy > directories, and tried to rebuild from svn (for no particular reason, > but I've done it easily in the past). Numpy went fine, but scipy did > this: (I don't want to waste people's time on this ... if it's > something peculiar to me, and hard to track down, I'll just go back to > the official released binary installer.)
> > C:\Documents and Settings\Gary\My > Documents\python\scipy\svn_scipy>python setup. > py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst The --compiler=mingw32 options go on build_clib and build_ext, not build. I think it should work if it is just set on config, but I've always done the following out of paranoia: python setup.py build_src build_clib --compiler=mingw32 build_ext --compiler=mingw32 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From elcorto at gmx.net Wed Jun 21 16:04:54 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 21 Jun 2006 22:04:54 +0200 Subject: [SciPy-user] extract array elements with where() output Message-ID: <4499A666.7020109@gmx.net> First of all: what's better (a) post this only on the numpy list (b) only on the scipy list (I think many scipy users may find the answer to questions like this one interesting) or (c) post on both. I can't extract elements from array x with a mask array of indices, but x.take (Numeric style) works. I'm sure that I have done such things before .... and they worked. In [58]: x Out[58]: array([0, 0, 1, 2, 3, 0, 0, 9]) In [59]: mask=where(x!=0.0)[0] In [60]: mask Out[60]: array([2, 3, 4, 7]) In [61]: x.take(mask) Out[61]: array([1, 2, 3, 9]) In [62]: x(mask) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/elcorto/ode_testdata/ TypeError: 'numpy.ndarray' object is not callable In [63]: numpy.__version__ Out[63]: '0.9.9.2612' cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From robert.kern at gmail.com Wed Jun 21 16:12:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 Jun 2006 15:12:55 -0500 Subject: [SciPy-user] extract array elements with where() output In-Reply-To: <4499A666.7020109@gmx.net> References: <4499A666.7020109@gmx.net> Message-ID: <4499A847.8040609@gmail.com> Steve Schmerler wrote: > First of all: what's better (a) post this only on the numpy list (b) > only on the scipy list (I think many scipy users may find the answer to > questions like this one interesting) or (c) post on both. Probably (a). (c) should never be done except for single-shot announcements. > I can't extract elements from array x with a mask array of indices, but > x.take (Numeric style) works. I'm sure that I have done such things > before .... and they worked. > > In [58]: x > Out[58]: array([0, 0, 1, 2, 3, 0, 0, 9]) > > In [59]: mask=where(x!=0.0)[0] > > In [60]: mask > Out[60]: array([2, 3, 4, 7]) > > In [61]: x.take(mask) > Out[61]: array([1, 2, 3, 9]) > > In [62]: x(mask) > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /home/elcorto/ode_testdata/ > > TypeError: 'numpy.ndarray' object is not callable You meant x[mask], not x(mask). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From a.u.r.e.l.i.a.n at gmx.net Wed Jun 21 16:14:34 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Wed, 21 Jun 2006 22:14:34 +0200 Subject: [SciPy-user] extract array elements with where() output In-Reply-To: <4499A666.7020109@gmx.net> References: <4499A666.7020109@gmx.net> Message-ID: <4499A8AA.9080606@gmx.net> Hi, > In [62]: x(mask) try x[mask] Johannes From elcorto at gmx.net Wed Jun 21 19:31:56 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 22 Jun 2006 01:31:56 +0200 Subject: [SciPy-user] extract array elements with where() output In-Reply-To: <4499A847.8040609@gmail.com> References: <4499A666.7020109@gmx.net> <4499A847.8040609@gmail.com> Message-ID: <4499D6EC.9030603@gmx.net> > > You meant x[mask], not x(mask). > Of course .... :) Thanks. cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From edmondo_minisci at yahoo.it Thu Jun 22 05:36:03 2006 From: edmondo_minisci at yahoo.it (Edmondo Minisci) Date: Thu, 22 Jun 2006 11:36:03 +0200 (CEST) Subject: [SciPy-user] Unable to find COW In-Reply-To: <44997EAB.3030009@gmail.com> Message-ID: <20060622093603.687.qmail@web26711.mail.ukl.yahoo.com> Dear Robert, I found that the error is in the way proc.py reads the file "proc/meminfo". It supposes that the information is on one line, whereas it is spread over multiple lines. Now I am able to modify the file. Since it is my intention to use cow for evolutionary algorithm runs on a Linux cluster system, do you know of any larger tutorial or work I could study? Thank you very much Edmondo --- Robert Kern wrote: > Edmondo Minisci wrote: > > Dear All, > > > > I resolved the problem by downloading the cow > files > > from the files from "scipy.migratedtosvn/Lib". > > > > It looks to be working and I am able to start the server in > the > > way described in the first example. > > > > After the start it requires the proc module and I > > downloaded from > > "scipy_core.migratedtosvn/scipy_distutils". > > > File > > > "/usr/lib64/python2.3/site-packages/scipy_distutils/proc.py", > > line 77, in mem_info > > used = eval(x[2]) > > File "", line 0, in ? > > NameError: name 'kB' is not defined > > It looks like you have an error in the code you are > trying to execute. > > -- > Robert Kern > > "I have come to believe that the whole world is an > enigma, a harmless enigma > that is made terrible by our own mad attempt to > interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Chat with your friends in real time! http://it.yahoo.com/mail_it/foot/*http://it.messenger.yahoo.com From nwagner at iam.uni-stuttgart.de Thu Jun 22 07:59:45 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 22 Jun 2006 13:59:45 +0200 Subject: [SciPy-user] Matlab to numpy Message-ID: <449A8631.4080903@iam.uni-stuttgart.de> I am trying to convert Matlab code into Numpy/Scipy. What is equivalent to np = 36 t2 =rand(np,1) r2=rand(np,1)./max(abs(sin(t2)),abs(cos(t2))) in numpy np=36 t2 = rand(np) r2 = rand(np)/... ?
Nils From david.douard at logilab.fr Thu Jun 22 08:09:09 2006 From: david.douard at logilab.fr (David Douard) Date: Thu, 22 Jun 2006 14:09:09 +0200 Subject: [SciPy-user] Matlab to numpy In-Reply-To: <449A8631.4080903@iam.uni-stuttgart.de> References: <449A8631.4080903@iam.uni-stuttgart.de> Message-ID: <20060622120909.GB1032@logilab.fr> On Thu, Jun 22, 2006 at 01:59:45PM +0200, Nils Wagner wrote: > I am trying to convert Matlab code into Numpy/Scipy. > > What is equivalent to > > np = 36 > t2 =rand(np,1) > r2=rand(np,1)./max(abs(sin(t2)),abs(cos(t2))) > > in numpy > > np=36 > t2 = rand(np) > r2 = rand(np)/... ? I would say, using numpy from numpy import * np = 36 t2 = rand(np) r2 = rand(np)/maximum(absolute(sin(t2)),absolute(cos(t2))) David > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations Développement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From mrmaple at gmail.com Thu Jun 22 09:37:20 2006 From: mrmaple at gmail.com (James Carroll) Date: Thu, 22 Jun 2006 09:37:20 -0400 Subject: [SciPy-user] Creating a 2D matrix with a gaussian hump? Message-ID: Hi, What's a good way to create a 2d gaussian kernel with numpy? I'm trying to create a 9x9 matrix with 1.0 in the center, and 0.0 in the corners, and a gaussian distribution from the center out. Thanks, -Jim From nwagner at iam.uni-stuttgart.de Thu Jun 22 09:49:16 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 22 Jun 2006 15:49:16 +0200 Subject: [SciPy-user] Matlab to numpy In-Reply-To: <20060622120909.GB1032@logilab.fr> References: <449A8631.4080903@iam.uni-stuttgart.de> <20060622120909.GB1032@logilab.fr> Message-ID: <449A9FDC.2020702@iam.uni-stuttgart.de> David Douard wrote: > On Thu, Jun 22, 2006 at 01:59:45PM +0200, Nils Wagner wrote: > >> I am trying to convert Matlab code into Numpy/Scipy. >> >> What is equivalent to >> >> np = 36 >> t2 =rand(np,1) >> r2=rand(np,1)./max(abs(sin(t2)),abs(cos(t2))) >> >> in numpy >> >> np=36 >> t2 = rand(np) >> r2 = rand(np)/... ? >> > > I would say, using numpy > > from numpy import * > np = 36 > t2 = rand(np) > r2 = rand(np)/maximum(absolute(sin(t2)),absolute(cos(t2))) > > > David > > >> Nils >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Thank you very much. And what is equivalent to a so-called economy size decomposition? [Q,R] = qr(A,0) Nils From nwagner at iam.uni-stuttgart.de Thu Jun 22 09:58:24 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 22 Jun 2006 15:58:24 +0200 Subject: [SciPy-user] shape mismatch: objects cannot be broadcast to a single shape Message-ID: <449AA200.3040501@iam.uni-stuttgart.de> Hi all, >>> k = arange(0,4) >>> r = linspace(0,3,10) >>> special.jn(k,r) Traceback (most recent call last): File "", line 1, in ?
ValueError: shape mismatch: objects cannot be broadcast to a single shape It would be nice if special.jn(k,r) returned a two-dimensional array corresponding to the lengths of k and r. How can I resolve this problem? Nils From david.douard at logilab.fr Thu Jun 22 10:45:11 2006 From: david.douard at logilab.fr (David Douard) Date: Thu, 22 Jun 2006 16:45:11 +0200 Subject: [SciPy-user] Matlab to numpy In-Reply-To: <449A9FDC.2020702@iam.uni-stuttgart.de> References: <449A8631.4080903@iam.uni-stuttgart.de> <20060622120909.GB1032@logilab.fr> <449A9FDC.2020702@iam.uni-stuttgart.de> Message-ID: <20060622144511.GC1032@logilab.fr> On Thu, Jun 22, 2006 at 03:49:16PM +0200, Nils Wagner wrote: > David Douard wrote: > > On Thu, Jun 22, 2006 at 01:59:45PM +0200, Nils Wagner wrote: > > [snip] > > > Thank you very much. > And what is equivalent to a so-called economy size decomposition? > > [Q,R] = qr(A,0) look at the qr function of scipy.linalg, e.g. import scipy help(scipy.linalg.qr) (see $PYTHONPATH/scipy/linalg/decomp.py for more details) David > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations Développement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From richter at hephy.oeaw.ac.at Thu Jun 22 10:52:34 2006 From: richter at hephy.oeaw.ac.at (Gerald Richter) Date: Thu, 22 Jun 2006 16:52:34 +0200 Subject: [SciPy-user] Creating a 2D matrix with a gaussian hump? In-Reply-To: References: Message-ID: <20060622145234.GA12417@uroboros.hephy.oeaw.ac.at> Hi Jim, I attached some code that does something like this. It has been a while since I wrote it, and I did that for a more general case. If you need the code to do some convolution, don't forget to normalize the kernel ;) (-this is why I needed the second function) - Hope you can at least grab some ideas from it... regards, Gerald. On Thu, Jun 22, 2006 at 09:37:20AM -0400, James Carroll wrote: > Hi, > > What's a good way to create a 2d gaussian kernel with numpy? > > I'm trying to create a 9x9 matrix with 1.0 in the center, and 0.0 in > the corners, and a gaussian distribution from the center out. > > Thanks, > -Jim > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Gerald Richter From richter at hephy.oeaw.ac.at Thu Jun 22 11:04:35 2006 From: richter at hephy.oeaw.ac.at (Gerald Richter) Date: Thu, 22 Jun 2006 17:04:35 +0200 Subject: [SciPy-user] Creating a 2D matrix with a gaussian hump? In-Reply-To: References: Message-ID: <20060622150435.GB12417@uroboros.hephy.oeaw.ac.at> Oh, well... the code was not attached... but this time! BTW: you won't get values == 0 at the borders of your (n X m) kernel grid, since in that case you could use an (n-2 X m-2) kernel. That is why the attached code uses multiples of the sigmas of the gaussians as the limits of the kernel definition range. Since one also provides the step sizes of the data grid, the kernel size may vary wrt. the multiples of the sigmas. regards, Gerald.
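For readers who want Gerald's normalization point spelled out, here is a minimal sketch in plain numpy; the function name and default values are illustrative only and are not taken from his attached gen_convkern.py:

import numpy

def gauss_kernel(size=9, fwhm=3.0):
    # squared distance of every grid point from the centre of a size x size grid
    x = numpy.arange(size, dtype=float)
    y = x[:, numpy.newaxis]
    x0 = y0 = (size - 1) / 2.0
    g = numpy.exp(-4 * numpy.log(2) * ((x - x0) ** 2 + (y - y0) ** 2) / fwhm ** 2)
    # normalize so that convolving with the kernel preserves the total signal
    return g / g.sum()

k = gauss_kernel()
print k.sum()   # should be 1.0 (up to rounding)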
On Thu, Jun 22, 2006 at 09:37:20AM -0400, James Carroll wrote: > Hi, > > What's a good way to create a 2d gaussian kernel with numpy? > > I'm trying to create a 9x9 matrix with 1.0 in the center, and 0.0 in > the corners, and a gaussian distribution from the center out. > > Thanks, > -Jim > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Gerald Richter -------------- next part -------------- A non-text attachment was scrubbed... Name: gen_convkern.py Type: text/x-python Size: 4129 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Thu Jun 22 11:05:57 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 22 Jun 2006 17:05:57 +0200 Subject: [SciPy-user] Matlab to numpy In-Reply-To: <20060622144511.GC1032@logilab.fr> References: <449A8631.4080903@iam.uni-stuttgart.de> <20060622120909.GB1032@logilab.fr> <449A9FDC.2020702@iam.uni-stuttgart.de> <20060622144511.GC1032@logilab.fr> Message-ID: <449AB1D5.8070804@iam.uni-stuttgart.de> David Douard wrote: > On Thu, Jun 22, 2006 at 03:49:16PM +0200, Nils Wagner wrote: > >> David Douard wrote: >> >>> On Thu, Jun 22, 2006 at 01:59:45PM +0200, Nils Wagner wrote: >>> >>> > [snip] > >>> >>> >> Thank you very much. >> And what is equivalent to a so-called economy size decomposition? >> >> [Q,R] = qr(A,0) >> > > look at the qr function of scipy.linalg, e.g. > > import scipy > help(scipy.linalg.qr) > > (see $PYTHONPATH/scipy/linalg/decomp.py for more details) > > David > > > The economy size decomposition means that if A is an m \times n matrix with m > n then only the first n columns of Q are computed. linalg.qr computes a full QR decomposition. Nils >> Nils >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From vincefn at users.sourceforge.net Thu Jun 22 11:24:29 2006 From: vincefn at users.sourceforge.net (Favre-Nicolin Vincent) Date: Thu, 22 Jun 2006 17:24:29 +0200 Subject: [SciPy-user] Creating a 2D matrix with a gaussian hump? In-Reply-To: References: Message-ID: <200606221724.29968.vincefn@users.sourceforge.net> > What's a good way to create a 2d gaussian kernel with numpy? > > I'm trying to create a 9x9 matrix with 1.0 in the center, and 0.0 in > the corners, and a gaussian distribution from the center out. from scipy import * x=arange(0,9,1,Float) y=x[:,NewAxis] x0,y0=4,4 fwhm=3 g=exp(-4*log(2)*((x-x0)**2+(y-y0)**2)/fwhm**2) print g If you want exactly 0 in the corners, just use g-g[0,0]. -- Vincent Favre-Nicolin Université Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From robert.kern at gmail.com Thu Jun 22 12:37:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 11:37:55 -0500 Subject: [SciPy-user] shape mismatch: objects cannot be broadcast to a single shape In-Reply-To: <449AA200.3040501@iam.uni-stuttgart.de> References: <449AA200.3040501@iam.uni-stuttgart.de> Message-ID: <449AC763.2080009@gmail.com> Nils Wagner wrote: > Hi all, > >>>> k = arange(0,4) >>>> r = linspace(0,3,10) >>>> special.jn(k,r) > Traceback (most recent call last): > File "", line 1, in ?
> ValueError: shape mismatch: objects cannot be broadcast to a single shape > > It would be nice if special.jn(k,r) returned a two-dimensional array > corresponding to the lengths of k and r. > > How can I resolve this problem? Pass in the right arguments. Make k a column vector. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Thu Jun 22 12:58:48 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 22 Jun 2006 18:58:48 +0200 Subject: [SciPy-user] shape mismatch: objects cannot be broadcast to a single shape In-Reply-To: <449AC763.2080009@gmail.com> References: <449AA200.3040501@iam.uni-stuttgart.de> <449AC763.2080009@gmail.com> Message-ID: On Thu, 22 Jun 2006 11:37:55 -0500 Robert Kern wrote: > Nils Wagner wrote: >> Hi all, >> >>>>> k = arange(0,4) >>>>> r = linspace(0,3,10) >>>>> special.jn(k,r) >> Traceback (most recent call last): >> File "", line 1, in ? >> ValueError: shape mismatch: objects cannot be broadcast >> to a single shape >> >> It would be nice if special.jn(k,r) returned a >> two-dimensional array >> corresponding to the lengths of k and r. >> >> How can I resolve this problem? > > Pass in the right arguments. Make k a column vector. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user Thank you. Please can you run the script from scipy import * k = arange(1,10) r = reshape(linspace(0,3,10),(len(linspace(0,3,10)),1)) H = special.jn(k,r) print H[0,:] Am I missing something? From robert.kern at gmail.com Thu Jun 22 13:21:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Jun 2006 12:21:15 -0500 Subject: [SciPy-user] shape mismatch: objects cannot be broadcast to a single shape In-Reply-To: References: <449AA200.3040501@iam.uni-stuttgart.de> <449AC763.2080009@gmail.com> Message-ID: <449AD18B.20105@gmail.com> Nils Wagner wrote: > Please can you run the script No, I won't run your script. You show me the output you get and describe to me the output you were expecting and why. *Then* I might run the script if there might be a problem that will differ between machines/installed versions or if I want to explore a problem that's displayed. But I'm not going to read your mind. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mrmaple at gmail.com Thu Jun 22 13:22:35 2006 From: mrmaple at gmail.com (James Carroll) Date: Thu, 22 Jun 2006 13:22:35 -0400 Subject: [SciPy-user] Creating a 2D matrix with a gaussian hump? In-Reply-To: <200606221724.29968.vincefn@users.sourceforge.net> References: <200606221724.29968.vincefn@users.sourceforge.net> Message-ID: Fantastic, Thanks Favre-Nicolin and Gerald! I generalized this just a bit to: def makeGaussian(size, fwhm = 3): """ Make a square gaussian kernel. size is the length of a side of the square fwhm is full-width-half-maximum, which can be thought of as an effective radius.
""" x = arange(0, size, 1, Float32) y = x[:,NewAxis] x0 = y0 = size // 2 return exp(-4*log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2) # which suits my newbie need to spell out what fwhm is.. On 6/22/06, Favre-Nicolin Vincent wrote: > > What's a good way to create a 2d gaussian kernel with numpy? > > > > I'm trying to create a 9x9 matrix with 1.0 in the center, and 0.0 in > > the corners, and a gaussian distribution from the center out. > > from scipy import * > x=arange(0,9,1,Float) > x=arange(0,9,1,Float) > y=x[:,NewAxis] > x0,y0=4,4 > fwhm=3 > g=exp(-4*log(2)*((x-x0)**2+(y-y0)**2)/fwhm**2) > print g > > If you want exactly 0 in the corners, just use g-g[0,0]. > > -- > Vincent Favre-Nicolin > Universit? Joseph Fourier > http://v.favrenicolin.free.fr > ObjCryst & Fox : http://objcryst.sourceforge.net > > From nwagner at iam.uni-stuttgart.de Thu Jun 22 13:45:40 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 22 Jun 2006 19:45:40 +0200 Subject: [SciPy-user] shape mismatch: objects cannot be broadcast to a single shape In-Reply-To: <449AD18B.20105@gmail.com> References: <449AA200.3040501@iam.uni-stuttgart.de> <449AC763.2080009@gmail.com> <449AD18B.20105@gmail.com> Message-ID: On Thu, 22 Jun 2006 12:21:15 -0500 Robert Kern wrote: > Nils Wagner wrote: >> Please can you run the script > > No, I won't run your script. You show me the output you >get and describe to me > the output you were expecting and why. *Then* I might >run the script if there > might be problem that will differ between >machines/installed versions or if I > want to explore a problem that's displayed. But I'm not >going to read your mind. > > -- OK I didn't expect nan >>> special.jn(2,0.1) 0.0012489586587999691 >>> special.jn(2,0.01) 1.2499895833805591e-05 >>> special.jn(2,0.0001) 1.2500001157853685e-09 >>> special.jn(2,0.0000001) 1.3897845548005439e-15 >>> special.jn(2,0.0000000001) 0.0 >>> special.jn(2,0.000000000) nan >>> scipy.__version__ '0.5.0.1988' Nils > Robert Kern > > "I have come to believe that the whole world is an >enigma, a harmless enigma > that is made terrible by our own mad attempt to >interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From dkuhlman at rexx.com Thu Jun 22 14:00:40 2006 From: dkuhlman at rexx.com (Dave Kuhlman) Date: Thu, 22 Jun 2006 11:00:40 -0700 Subject: [SciPy-user] Update to my SciPy document Message-ID: <20060622180040.GA81762@cutter.rexx.com> I've made a few minor updates to my SciPy document. And, with the help of Nicky Van Foreest, I've also added an example to the section on stats. You can find it here: http://www.rexx.com/~dkuhlman/scipy_course_01.html Comments and suggestions are welcome. Dave -- Dave Kuhlman http://www.rexx.com/~dkuhlman From gpajer at rider.edu Thu Jun 22 14:03:22 2006 From: gpajer at rider.edu (Gary Pajer) Date: Thu, 22 Jun 2006 14:03:22 -0400 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <20060622180040.GA81762@cutter.rexx.com> References: <20060622180040.GA81762@cutter.rexx.com> Message-ID: <449ADB6A.90401@rider.edu> Dave Kuhlman wrote: >I've made a few minor updates to my SciPy document. And, with the >help of Nicky Van Foreest, I've also added an example to the >section on stats. You can find it here: > > http://www.rexx.com/~dkuhlman/scipy_course_01.html > >Comments and suggestions are welcome. > >Dave > > > Looking good! 
fyi: the links to PyTables seem to be broken. -gary From lists.steve at arachnedesign.net Thu Jun 22 14:21:01 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Thu, 22 Jun 2006 14:21:01 -0400 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <20060622180040.GA81762@cutter.rexx.com> References: <20060622180040.GA81762@cutter.rexx.com> Message-ID: > You can find it here: > > http://www.rexx.com/~dkuhlman/scipy_course_01.html Wow .. haven't looked at it yet, but just want to thank you for putting that together! > Comments and suggestions are welcome. I'll try to check it out over the next couple of days. Thanks again, -steve From fperez.net at gmail.com Thu Jun 22 14:24:41 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 22 Jun 2006 12:24:41 -0600 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <20060622180040.GA81762@cutter.rexx.com> References: <20060622180040.GA81762@cutter.rexx.com> Message-ID: On 6/22/06, Dave Kuhlman wrote: > I've made a few minor updates to my SciPy document. And, with the > help of Nicky Van Foreest, I've also added an example to the > section on stats. You can find it here: > > http://www.rexx.com/~dkuhlman/scipy_course_01.html Great! You should add a little blurb about it on the wiki: http://scipy.org/Topical_Software If that page stays up to date, it will be a good resource to point newcomers to for finding docs, tools, etc. And thanks for the effort! Cheers, f From falted at pytables.org Thu Jun 22 15:11:34 2006 From: falted at pytables.org (Francesc Altet) Date: Thu, 22 Jun 2006 21:11:34 +0200 Subject: [SciPy-user] Update to my SciPy document Message-ID: <200606222111.36079.falted@pytables.org> Ooops, it seems like I'm subscribed to this list with an older e-mail address. Retrying... ---------- Forwarded message ---------- Subject: Re: [SciPy-user] Update to my SciPy document Date: Thursday 22 June 2006 20:53 From: Francesc Altet To: scipy-user at scipy.net On Thursday 22 June 2006 20:00, Dave Kuhlman wrote: > I've made a few minor updates to my SciPy document. And, with the > help of Nicky Van Foreest, I've also added an example to the > section on stats. You can find it here: > > http://www.rexx.com/~dkuhlman/scipy_course_01.html > > Comments and suggestions are welcome. Good stuff indeed. Regarding PyTables section: - From PyTables 1.3 on, NumPy (and hence SciPy) arrays are supported right out of the box in Array objects (the ones you are using). So, if you write a NumPy array, you will get a NumPy array (the same goes for Numeric and numarray). In other objects (EArray, VLArray or Table) you can make use of the 'flavor' parameter in constructors to tell PyTables: 'Hey, every time that I read from this object, please, return me an (rec)array with the appropriate flavor'. Of course, PyTables will try hard to avoid doing data copies in conversions (i.e. the array protocol is used whenever possible). - The procedure for installation can be simplified somewhat: $ tar xvzf orig/pytables-1.3.2.tar.gz $ cd pytables-1.3.2 $ python setup.py build $ sudo python setup.py install - Perhaps a nice feature of PyTables that you could document is its capability to read slices of arrays directly from disk. You can do this by providing the start, stop and step parameters to the node.read() method, but it would be more appropriate (especially when dealing with multidimensional data) to use the __getitem__ method that exposes all the data nodes.
Examples: node[1] # Get the element at index 1 of a potentially multidimensional node node[1:4:3] # Get a slice node[1:4:3, 2:10:2, ..., 3] # The complete syntax for slices is supported. Cheers! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" From cookedm at physics.mcmaster.ca Thu Jun 22 16:45:55 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 22 Jun 2006 16:45:55 -0400 Subject: [SciPy-user] shape mismatch: objects cannot be broadcast to a single shape In-Reply-To: References: <449AA200.3040501@iam.uni-stuttgart.de> <449AC763.2080009@gmail.com> <449AD18B.20105@gmail.com> Message-ID: <20060622164555.6b09652d@arbutus.physics.mcmaster.ca> On Thu, 22 Jun 2006 19:45:40 +0200 "Nils Wagner" wrote: > >>> special.jn(2,0.000000000) > nan Well that's just no good. Try scipy.special.jv instead in the meantime. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From mark_everitt at msn.com Fri Jun 23 08:17:47 2006 From: mark_everitt at msn.com (Mark Everitt) Date: Fri, 23 Jun 2006 13:17:47 +0100 Subject: [SciPy-user] odeint and complex coupled differential equations Message-ID: <6686E796-5F30-4C59-945A-A633D6B202A3@msn.com> Hi everyone, I'm having a problem with odeint and complex numbers.
I have a > coupled differential equation: > > def hyper10101(c,t,d,g): > dc = array(zeros(11,Complex32)) > dc[0] = 1j*d[2]*c[0] + 2j*d[4]*c[0] - 1j*g[4]*c[1]*sqrt(2) > dc[1] = 1j*d[2]*c[1] + 1j*d[4]*c[1] - 1j*g[4]*c[0]*sqrt(2) - 1j*g > [5]*c[2]*exp(1j*(d[5]-d[0])*t) > dc[2] = 1j*d[0]*c[2] + 1j*d[2]*c[2] + 1j*d[4]*c[2] - 1j*g[5]*c[1] > *exp(1j*(d[0]-d[5])*t) - 1j*g[0]*c[3] > dc[3] = 1j*d[2]*c[3] + 1j*d[4]*c[3] - 1j*g[0]*c[2] - 1j*g[1]*c[4] > dc[4] = 1j*d[1]*c[4] + 1j*d[2]*c[4] + 1j*d[4]*c[4] - 1j*g[1]*c > [3] - 1j*g[2]*c[5] > dc[5] = 1j*d[1]*c[5] + 1j*d[4]*c[5] - 1j*g[2]*c[4] - 1j*g[3]*c[6]; > dc[6] = 1j*d[1]*c[6] + 1j*d[3]*c[6] + 1j*d[4]*c[6] - 1j*g[3]*c > [5] - 1j*g[4]*c[7] #- 1j*g[6]*c[9] > dc[7] = 1j*d[1]*c[7] + 1j*d[3]*c[7] - 1j*g[4]*c[6] - 1j*g[5]*c[8] > *exp(1j*(d[5]-d[0])*t) > dc[8] = 1j*d[0]*c[8] + 1j*d[1]*c[8] + 1j*d[3]*c[8] - 1j*g[5]*c[7] > *exp(1j*(d[0]-d[5])*t) - 1j*g[0]*c[9] > dc[9] = 1j*d[1]*c[9] + 1j*d[3]*c[9] - 1j*g[0]*c[8] - 1j*g[1]*c > [10]*sqrt(2) > dc[10] = 2j*d[1]*c[10] + 1j*d[3]*c[10] - 1j*g[1]*c[9]*sqrt(2) > return dc > > d = array([d1,d2,d3,d4,d5,d6]) > g = array([gab,gbc,gcd,gde,gef,gfa]) > > And the elements of these are floats. I call odeint like this: > > Y = odeint(hyper10101,c10101,tt,args=(d,g)) > > This returns the error: > > TypeError: array cannot be safely cast to required type > odepack.error: Result from function call is not a proper array of > floats. > > Are complex numbers broken for this, or am I just missing something? > > Mark > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > AFAIK. you can't use odeint with complex inputs, but you can double the size of your ODE \dot{z} = f(z,t) \dot{x}+i \dot{y} = \Re{f} + i \Im{f} where i = \sqrt{-1} \dot{x} = \Re{f(z,t)} \dot{y} = \Im{f(z,t)} The initial conditions are x_0 = \Re{z_0} y_0=\Im{z_0} Nils From bart.vandereycken at cs.kuleuven.be Fri Jun 23 09:10:05 2006 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Fri, 23 Jun 2006 15:10:05 +0200 Subject: [SciPy-user] Matlab to numpy In-Reply-To: <449AB1D5.8070804@iam.uni-stuttgart.de> References: <449A8631.4080903@iam.uni-stuttgart.de> <20060622120909.GB1032@logilab.fr> <449A9FDC.2020702@iam.uni-stuttgart.de> <20060622144511.GC1032@logilab.fr> <449AB1D5.8070804@iam.uni-stuttgart.de> Message-ID: Nils Wagner wrote: > The economy size decomposition means that if A is an m \?imes n matrix > with m > n then only the first n columns of Q are computed. > > linalg.qr computes a full QR decomposition. > I also use Matlab's economy-size QR method quite often. So I ended up writing my own python-wrapper. The qr-method in scipy really needs more functionality. You can basically ask 3 things from a QR method: 1) The upper triangular matrix R R = qr(A) 2) R and the unitary matrix Q Q,R = qr(A) 3) R, Q and a permutation vector E, such that abs(diag(R)) is decreasing. Q,R,E = qr(A) For all those methods, an economy-size version should be written. I don't think this will demand much time, because method 2 is already available. For Matlab-users it would be convenient to use the Matlab notation (like above), but IMO the economy flag qr(A,0) is not transparent. For method 1) you just call xGEQRF and not xORGQR. An economy version is also easy. I adjusted the wrapper of decompy.py to a method decomp.qr_r (you can find it in the attachment). Method 2) needs an economy version. 
This means that you don't construct the whole matrix Q in the call xORGQR but only the first N columns. Method 3) needs a new wrapper to xGEQP3. This is useful when A is not of full rank. Maybe I'll try to implement this in the next days or so. Regards, Bart -------------- next part -------------- A non-text attachment was scrubbed... Name: decomp_qr.py Type: text/x-python Size: 2223 bytes Desc: not available URL: From dkuhlman at rexx.com Fri Jun 23 19:50:07 2006 From: dkuhlman at rexx.com (Dave Kuhlman) Date: Fri, 23 Jun 2006 16:50:07 -0700 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <449ADB6A.90401@rider.edu> References: <20060622180040.GA81762@cutter.rexx.com> <449ADB6A.90401@rider.edu> Message-ID: <20060623235007.GA5869@cutter.rexx.com> On Thu, Jun 22, 2006 at 02:03:22PM -0400, Gary Pajer wrote: [snip] > Looking good! fyi: the links to PyTables seem to be broken. Fixed. I hope I've caught them all. Thanks. Dave -- Dave Kuhlman http://www.rexx.com/~dkuhlman From dkuhlman at rexx.com Fri Jun 23 19:51:22 2006 From: dkuhlman at rexx.com (Dave Kuhlman) Date: Fri, 23 Jun 2006 16:51:22 -0700 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <200606222111.36079.falted@pytables.org> References: <200606222111.36079.falted@pytables.org> Message-ID: <20060623235122.GB5869@cutter.rexx.com> On Thu, Jun 22, 2006 at 09:11:34PM +0200, Francesc Altet wrote: [snip] > > - From PyTables 1.3 on, NumPy (and hence SciPy) arrays are supported right > out of the box in Array objects (the ones you are using). So, if you write a > NumPy array, you will get a NumPy array (the same goes for Numeric and > numarray). In other objects (EArray, VLArray or Table) you can make use of > the 'flavor' parameter in constructors to tell PyTables: 'Hey, every time > that I read from this object, please, return me an (rec)array with the > appropriate flavor'. Of course, PyTables will try hard to avoid doing data > copies in conversions (i.e. the array protocol is used whenever possible). > Francesc - Thanks for updating me on this. I've shamelessly copied the above paragraph (with minor edits) into my document. Thank you. (But, please let me know if that bothers you.) > - The procedure for installation can be simplified somewhat: > > $ tar xvzf orig/pytables-1.3.2.tar.gz > $ cd pytables-1.3.2 > $ python setup.py build > $ sudo python setup.py install I used: $ python setup.py build_ext --inplace instead of: $ python setup.py build because that's what the PyTables README.txt says to do. If I use "build" instead of "build_ext", does that mean that I would not need to install Pyrex? > > - Perhaps a nice feature of PyTables that you could document is its > capability to read slices of arrays directly from disk. You can do this by > providing the start, stop and step parameters to node.read() method, but it > would be more appropriate (specially when dealing with multidimensional > data) using the __getitem__ method that expose all the data nodes. Examples: > node[1] # Get the first element of potentially multidimensional node > node[1:4:3] # Get an slice > node[1:4:3, 2:10:2, ..., 3] # The complete syntax for slices is supported. > Slick. I've added several examples and a note. Thanks for the help. 
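To make the slicing that Francesc describes concrete, here is a small sketch against the PyTables 1.3-era API; the file name, node name, and data are made up for illustration, so treat it as a hedged example rather than canonical usage:

import numpy
import tables

# write a small array, then read pieces of it back straight from disk
h5 = tables.openFile('demo.h5', 'w')
h5.createArray(h5.root, 'arr', numpy.arange(20.0))
h5.close()

h5 = tables.openFile('demo.h5', 'r')
node = h5.root.arr
print node[1]       # the single element at index 1, read from disk
print node[1:10:3]  # a strided slice, also read from disk
h5.close()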
Dave -- Dave Kuhlman http://www.rexx.com/~dkuhlman From ckkart at hoc.net Fri Jun 23 23:02:56 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 24 Jun 2006 12:02:56 +0900 Subject: [SciPy-user] scipy build bdist_rpm Message-ID: <449CAB60.5060409@hoc.net> Hi, I tried to build a binary rpm using python setup.py bdist_rpm on scipy 0.4.9 and get the following error: building extension "scipy.fftpack._fftpack" sources Traceback (most recent call last): File "setup.py", line 50, in ? setup_package() File "setup.py", line 42, in setup_package configuration=configuration ) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 170, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 87, in run self.build_sources() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 106, in build_sources self.build_extension_sources(ext) File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 218, in build_extension_sources sources = self.f2py_sources(sources, ext) File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", line 412, in f2py_sources raise ValueError("%r missing" % (target_file,)) ValueError: '_fftpackmodule.c' missing error: Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) RPM build errors: Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) error: command 'rpmbuild' failed with exit status 1 A normal build however runs without problems. Thanks for any hint, Christian From robert.kern at gmail.com Fri Jun 23 23:09:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jun 2006 22:09:27 -0500 Subject: [SciPy-user] scipy build bdist_rpm In-Reply-To: <449CAB60.5060409@hoc.net> References: <449CAB60.5060409@hoc.net> Message-ID: <449CACE7.5050708@gmail.com> Christian Kristukat wrote: > Hi, > I tried to build a binary rpm using > > python setup.py bdist_rpm > > on scipy 0.4.9 and get the following error: > > building extension "scipy.fftpack._fftpack" sources > Traceback (most recent call last): > File "setup.py", line 50, in ? > setup_package() > File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", > line 412, in f2py_sources > raise ValueError("%r missing" % (target_file,)) > ValueError: '_fftpackmodule.c' missing > error: Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) > > > RPM build errors: > Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) > error: command 'rpmbuild' failed with exit status 1 > > A normal build however runs without problems. numpy.distutils adds a build_src command that scipy uses extensively. The command dependencies are sometimes not triggered correctly.
Try explicitly listing the important commands: $ python setup.py build_src build_clib build_ext build bdist_rpm -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ckkart at hoc.net Fri Jun 23 23:22:11 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 24 Jun 2006 12:22:11 +0900 Subject: [SciPy-user] scipy build bdist_rpm In-Reply-To: <449CACE7.5050708@gmail.com> References: <449CAB60.5060409@hoc.net> <449CACE7.5050708@gmail.com> Message-ID: <449CAFE3.5050605@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Hi, >> I tried to build a binary rpm using >> >> python setup.py bdist_rpm >> >> on scipy 0.4.9 and get the following error: >> >> building extension "scipy.fftpack._fftpack" sources >> Traceback (most recent call last): >> File "setup.py", line 50, in ? >> setup_package() > >> File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", >> line 412, in f2py_sources >> raise ValueError("%r missing" % (target_file,)) >> ValueError: '_fftpackmodule.c' missing >> error: Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) >> >> >> RPM build errors: >> Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) >> error: command 'rpmbuild' failed with exit status 1 >> >> A normal build however runs without problems. > > numpy.distutils adds a build_src command that scipy uses extensively. The > command dependencies are sometimes no triggered correctly. Try explicitly > listing the important commands: > > $ python setup.py build_src build_clib build_ext build bdist_rpm > Thanks, but it's still the same or should I remove the 'build' directory first? Christian From robert.kern at gmail.com Fri Jun 23 23:47:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Jun 2006 22:47:14 -0500 Subject: [SciPy-user] scipy build bdist_rpm In-Reply-To: <449CAFE3.5050605@hoc.net> References: <449CAB60.5060409@hoc.net> <449CACE7.5050708@gmail.com> <449CAFE3.5050605@hoc.net> Message-ID: <449CB5C2.6090400@gmail.com> Christian Kristukat wrote: > Robert Kern wrote: >> Christian Kristukat wrote: >>> Hi, >>> I tried to build a binary rpm using >>> >>> python setup.py bdist_rpm >>> >>> on scipy 0.4.9 and get the following error: >>> >>> building extension "scipy.fftpack._fftpack" sources >>> Traceback (most recent call last): >>> File "setup.py", line 50, in ? >>> setup_package() >>> File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_src.py", >>> line 412, in f2py_sources >>> raise ValueError("%r missing" % (target_file,)) >>> ValueError: '_fftpackmodule.c' missing >>> error: Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) >>> >>> >>> RPM build errors: >>> Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.24879 (%build) >>> error: command 'rpmbuild' failed with exit status 1 >>> >>> A normal build however runs without problems. >> numpy.distutils adds a build_src command that scipy uses extensively. The >> command dependencies are sometimes no triggered correctly. Try explicitly >> listing the important commands: >> >> $ python setup.py build_src build_clib build_ext build bdist_rpm > > Thanks, but it's still the same or should I remove the 'build' directory first? No, I just tried that (and then realized that it wouldn't make a difference anyways since bdist_rpm runs the build in a separate directory). 
I noticed that the rpm-tmp.* file is a shell script that appears to be what is executed for the bdist_rpm. The actual command that gets executed is this: env CFLAGS="$RPM_OPT_FLAGS" python setup.py build That's probably the problem; it's missing the other commands (and since I actually had --fcompiler options on some of those commands, that's *really* annoying). I'm not too familiar with RPM builds, though, so I don't know which component is to blame; likely numpy.distutils should be properly setting up the dependency chain of commands such that "python setup.py build" does everything correctly. Please enter a ticket in the numpy Trac describing the problem: http://projects.scipy.org/scipy/numpy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Sat Jun 24 01:29:27 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 23 Jun 2006 22:29:27 -0700 Subject: [SciPy-user] Matlab lsqlin equivalent: Constrained least squares Message-ID: <449CCDB7.6000002@msg.ucsf.edu> Hi, A friend of mine needs a constrained least squares solver. He says that Matlab's lsqlin http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/lsqlin.shtml looks like it should do the trick. Is there already some function in scipy.optimize that is equivalent? I also found a module called: mpfit http://cars9.uchicago.edu/software/python/mpfit.html Has anyone here used this? Thanks, Sebastian Haase From robert.kern at gmail.com Sat Jun 24 01:39:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 00:39:59 -0500 Subject: [SciPy-user] Matlab lsqlin equivalent: Constrained least squares In-Reply-To: <449CCDB7.6000002@msg.ucsf.edu> References: <449CCDB7.6000002@msg.ucsf.edu> Message-ID: <449CD02F.2050409@gmail.com> Sebastian Haase wrote: > Hi, > A friend of mine needs a constrained least squares solver. > He says that Matlab's lsqlin > http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/lsqlin.shtml > looks like it should do the trick. > > Is there already some function in scipy.optimize > that is equivalent? Depends. What kind of bounds does he actually need? Constrained Optimizers (multivariate) fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer (if you use this please quote their papers -- see help) fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and adapted to C by Jean-Sebastien Roy. fmin_cobyla -- Constrained Optimization BY Linear Approximation fmin_l_bfgs_b and fmin_tnc only do rectilinear min-max bounds. fmin_cobyla allows arbitrary sets of (possibly nonlinear) "f(x)>=b" bounds. The equality constraints in lsqlin can probably be handled by transforming the problem into the solution space of Aeq*x=b. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From haase at msg.ucsf.edu Sat Jun 24 01:45:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 23 Jun 2006 22:45:30 -0700 Subject: [SciPy-user] Matlab lsqlin equivalent: Constrained least squares In-Reply-To: <449CD02F.2050409@gmail.com> References: <449CCDB7.6000002@msg.ucsf.edu> <449CD02F.2050409@gmail.com> Message-ID: <449CD17A.3060707@msg.ucsf.edu> Robert Kern wrote: > Sebastian Haase wrote: >> Hi, >> A friend of mine needs a constrained least squares solver. >> He says that Matlab's lsqlin >> http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/lsqlin.shtml >> looks like it should do the trick. >> >> Is there already some function in scipy.optimize >> that is equivalent? > > Depends. What kind of bounds does he actually need? > I think for now he is looking for a linear least square with a simple upper bound, like: x = lsqlin(C,d,A,b) solves the linear system C*x=d in the least-squares sense subject to A*x<=b, where C is m-by-n. Does that make sense!? Thanks for the reply. Sebastian > Constrained Optimizers (multivariate) > > fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer > (if you use this please quote their papers -- see help) > > fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and > adapted to C by Jean-Sebastien Roy. > > fmin_cobyla -- Constrained Optimization BY Linear Approximation > > > fmin_l_bfgs_b and fmin_tnc only do rectilinear min-max bounds. fmin_cobyla > allows arbitrary sets of (possibly nonlinear) "f(x)>=b" bounds. > > The equality constraints in lsqlin can probably be handled by transforming the problem into > the solution space of Aeq*x=b. > From elcorto at gmx.net Sat Jun 24 06:52:57 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Sat, 24 Jun 2006 12:52:57 +0200 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <20060622180040.GA81762@cutter.rexx.com> References: <20060622180040.GA81762@cutter.rexx.com> Message-ID: <449D1989.1020102@gmx.net> Dave Kuhlman wrote: > I've made a few minor updates to my SciPy document. And, with the > help of Nicky Van Foreest, I've also added an example to the > section on stats. You can find it here: > > http://www.rexx.com/~dkuhlman/scipy_course_01.html > > Comments and suggestions are welcome. > > Dave > Really cool thing!! (In section 8 you still have the "old" things like scipy_core, scipy.base & stuff listed ... :)) cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From stephen.walton at csun.edu Sat Jun 24 18:10:05 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Sat, 24 Jun 2006 15:10:05 -0700 Subject: [SciPy-user] scipy build bdist_rpm In-Reply-To: <449CB5C2.6090400@gmail.com> References: <449CAB60.5060409@hoc.net> <449CACE7.5050708@gmail.com> <449CAFE3.5050605@hoc.net> <449CB5C2.6090400@gmail.com> Message-ID: <449DB83D.10401@csun.edu> Robert Kern wrote
Steve Walton From robert.kern at gmail.com Sat Jun 24 18:34:08 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 17:34:08 -0500 Subject: [SciPy-user] scipy build bdist_rpm In-Reply-To: <449DB83D.10401@csun.edu> References: <449CAB60.5060409@hoc.net> <449CACE7.5050708@gmail.com> <449CAFE3.5050605@hoc.net> <449CB5C2.6090400@gmail.com> <449DB83D.10401@csun.edu> Message-ID: <449DBDE0.30106@gmail.com> Stephen Walton wrote: > [By the way, a search on "My Tickets" when I'm logged into Trac doesn't > cause this to pop up. Bug?] No, I believe that that report shows the tickets which are assigned to you, not ones in which you reported. We have a bunch of custom reports that we use at Enthought which include "Show me the tickets I have created." I'll look into bringing them over to the Numpy and Scipy Tracs. Some day. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From imcsee at gmail.com Sat Jun 24 20:45:59 2006 From: imcsee at gmail.com (imcs ee) Date: Sun, 25 Jun 2006 08:45:59 +0800 Subject: [SciPy-user] Update to my SciPy document In-Reply-To: <449D1989.1020102@gmx.net> References: <20060622180040.GA81762@cutter.rexx.com> <449D1989.1020102@gmx.net> Message-ID: cool . In section 5 there is a "old" thing scipy.base, maybe it should be numpy :)) On 6/24/06, Steve Schmerler wrote: > Dave Kuhlman wrote: > > I've made a few minor updates to my SciPy document. And, with the > > help of Nicky Van Foreest, I've also added an example to the > > section on stats. You can find it here: > > > > http://www.rexx.com/~dkuhlman/scipy_course_01.html > > > > Comments and suggestions are welcome. > > > > Dave > > > > Really cool thing!! (In section 8 you still have the "old" things like > scipy_core, scipy.base & stuff listed ... :)) > > cheers, > steve > > -- > Random number generation is the art of producing pure gibberish as > quickly as possible. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From kwgoodman at gmail.com Sun Jun 25 00:00:02 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Sat, 24 Jun 2006 21:00:02 -0700 Subject: [SciPy-user] scipy.io.mio.savemat error Message-ID: After installing the latest numpy and scipy from svn, I get an error when I use scipy.io.mio.savemat /usr/local/lib/python2.4/site-packages/scipy/io/mio.py in savemat(filename, dict) 851 filename = filename + ".mat" 852 fid = fopen(filename,'wb') --> 853 M = not LittleEndian 854 O = 0 855 for variable in dict.keys(): NameError: global name 'LittleEndian' is not defined >> scipy.__version__ '0.5.0.1999' >> numpy.__version__ '0.9.9.2676' From robert.kern at gmail.com Sun Jun 25 00:09:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Jun 2006 23:09:04 -0500 Subject: [SciPy-user] scipy.io.mio.savemat error In-Reply-To: References: Message-ID: <449E0C60.2040409@gmail.com> Keith Goodman wrote: > After installing the latest numpy and scipy from svn, I get an error > when I use scipy.io.mio.savemat > > /usr/local/lib/python2.4/site-packages/scipy/io/mio.py in > savemat(filename, dict) > 851 filename = filename + ".mat" > 852 fid = fopen(filename,'wb') > --> 853 M = not LittleEndian > 854 O = 0 > 855 for variable in dict.keys(): > > NameError: global name 'LittleEndian' is not defined Fixed in SVN. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From kwgoodman at gmail.com Sun Jun 25 11:53:00 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 25 Jun 2006 08:53:00 -0700 Subject: [SciPy-user] scipy.io.mio.savemat error In-Reply-To: <449E0C60.2040409@gmail.com> References: <449E0C60.2040409@gmail.com> Message-ID: On 6/24/06, Robert Kern wrote: > Keith Goodman wrote: > > After installing the latest numpy and scipy from svn, I get an error > > when I use scipy.io.mio.savemat > > > > /usr/local/lib/python2.4/site-packages/scipy/io/mio.py in > > savemat(filename, dict) > > 851 filename = filename + ".mat" > > 852 fid = fopen(filename,'wb') > > --> 853 M = not LittleEndian > > 854 O = 0 > > 855 for variable in dict.keys(): > > > > NameError: global name 'LittleEndian' is not defined > > Fixed in SVN. Now I get /usr/local/lib/python2.4/site-packages/scipy/io/mio.py in savemat(filename, dict) 857 for variable in dict.keys(): 858 var = dict[variable] --> 859 if type(var) is not ArrayType: 860 continue 861 if var.dtype.char == 'S1': NameError: global name 'ArrayType' is not defined From elcorto at gmx.net Sun Jun 25 13:17:16 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Sun, 25 Jun 2006 19:17:16 +0200 Subject: [SciPy-user] odeint rtol and atol default values Message-ID: <449EC51C.5050700@gmx.net> Hi What are the default values for rtol and atol in scipy.integrate.odeint? cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From nwagner at iam.uni-stuttgart.de Sun Jun 25 13:58:32 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 25 Jun 2006 19:58:32 +0200 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: <449EC51C.5050700@gmx.net> References: <449EC51C.5050700@gmx.net> Message-ID: On Sun, 25 Jun 2006 19:17:16 +0200 Steve Schmerler wrote: > Hi > > What are the default values for rtol and atol in >scipy.integrate.odeint? > > cheers, > steve > > -- > Random number generation is the art of producing pure >gibberish as > quickly as possible. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user You can find them in scipy/Lib/integrate/ode.py rtol=1e-6,atol=1e-12 Nils From elcorto at gmx.net Sun Jun 25 14:47:06 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Sun, 25 Jun 2006 20:47:06 +0200 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: References: <449EC51C.5050700@gmx.net> Message-ID: <449EDA2A.6030303@gmx.net> Nils Wagner wrote: > On Sun, 25 Jun 2006 19:17:16 +0200 > Steve Schmerler wrote: >> Hi >> >> What are the default values for rtol and atol in >> scipy.integrate.odeint? >> > > You can find them in scipy/Lib/integrate/ode.py > > rtol=1e-6,atol=1e-12 > Sure? What you mention is in site-packages/scipy/integrate/ode.py where the class ode is defined. But I'm using integrate.odeint (in site-packages/scipy/integrate/odepack.py) and there is nothing mentioned about it. odeint is there called like this import _odepack [...] output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, full_output, rtol, atol, tcrit, h0, hmax,hmin, ixpr, mxstep, mxhnil, mxordn, mxords) BTW, does anyone have experiences with the ode class (and it's vode (or cvode?) solver) rather than odeint's lsoda? 
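A note for later readers of this thread: whatever the defaults turn out to be, both tolerances can simply be passed explicitly to odeint. A toy sketch, where the tolerance values and the decaying-exponential right-hand side are only illustrative:

from numpy import arange, exp
from scipy.integrate import odeint

def rhs(y, t):
    # dy/dt = -y, exact solution y(t) = exp(-t)
    return -y

t = arange(0.0, 5.0, 0.1)
y = odeint(rhs, 1.0, t, rtol=1e-6, atol=1e-10)
print abs(y[-1, 0] - exp(-t[-1]))   # error at the endpoint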
cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From sgarcia at olfac.univ-lyon1.fr Mon Jun 26 04:24:36 2006 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Mon, 26 Jun 2006 10:24:36 +0200 Subject: [SciPy-user] scipy itpython and debian Message-ID: <449F99C4.8020903@olfac.univ-lyon1.fr> Sorry if it is not the good place, I have a debian testing (etch) , I use the python-matplotlib and python-ipythonn official debian package. For numpy and scipy I build them from SVN. Everythings seems to works. I don't install the old scipy debian package. My problem is when I upgrade with apt-get updgrade. I have something like that : Param?trage de debconf (1.5.2) ... Compiling /usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py ... File "/usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py", line 53 samplespace_index = dict((x, i) for i, x in enumerate(samplespace)) ^ SyntaxError: invalid syntax For the moment the solution is to remove scipy and numpy and rebuild them after. 2 questions : - is there an easier way to upgrade without interaction with scipy ? - is there an unofficial debian repository for scipy and numpy while waiting for the official new scipy debian package ? thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Mon Jun 26 04:35:00 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 26 Jun 2006 10:35:00 +0200 Subject: [SciPy-user] scipy itpython and debian In-Reply-To: <449F99C4.8020903@olfac.univ-lyon1.fr> References: <449F99C4.8020903@olfac.univ-lyon1.fr> Message-ID: <449F9C34.3010505@gmx.net> Samuel GARCIA wrote: > Sorry if it is not the good place, > > I have a debian testing (etch) , I use the python-matplotlib and > python-ipythonn official debian package. > For numpy and scipy I build them from SVN. Everythings seems to works. I > don't install the old scipy debian package. > > My problem is when I upgrade with apt-get updgrade. I have something > like that : > > Param?trage de debconf (1.5.2) ... > Compiling > /usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py > ... > File > "/usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py", > line 53 > samplespace_index = dict((x, i) for i, x in enumerate(samplespace)) > ^ > SyntaxError: invalid syntax > > > For the moment the solution is to remove scipy and numpy and rebuild > them after. > > 2 questions : > - is there an easier way to upgrade without interaction with scipy ? > - is there an unofficial debian repository for scipy and numpy while > waiting for the official new scipy debian package ? > I have exactly the same problem with testing. You don't need to rebuild. What I do is mv /usr/lib/python2.3/site-packages/numpy/ /tmp/numpy mv /usr/lib/python2.3/site-packages/scipy/ /tmp/scipy apt-get [dist-]upgrade mv /tmp/numpy /usr/lib/python2.3/site-packages/ mv /tmp/scipy /usr/lib/python2.3/site-packages/ However, if someone knows a *real* solution, that would be nice. cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. 
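Steve's move-aside workaround is easy to script, for anyone who tires of typing the mv commands. A throwaway sketch -- the stash_pkgs.py name is made up, and the site-packages path assumes Debian's python2.3 layout:

import os, shutil, sys

SITE = '/usr/lib/python2.3/site-packages'   # Debian layout; adjust to taste
STASH = '/tmp'
PKGS = ['numpy', 'scipy']

def move_all(src, dst):
    # shuffle the locally-built packages between the two directories
    for p in PKGS:
        shutil.move(os.path.join(src, p), os.path.join(dst, p))

if __name__ == '__main__':
    if sys.argv[1] == 'stash':       # run before apt-get [dist-]upgrade
        move_all(SITE, STASH)
    elif sys.argv[1] == 'restore':   # run after the upgrade finishes
        move_all(STASH, SITE)

That is, python stash_pkgs.py stash, then apt-get [dist-]upgrade, then python stash_pkgs.py restore.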
From robert.kern at gmail.com Mon Jun 26 04:43:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 26 Jun 2006 03:43:43 -0500 Subject: [SciPy-user] scipy itpython and debian In-Reply-To: <449F99C4.8020903@olfac.univ-lyon1.fr> References: <449F99C4.8020903@olfac.univ-lyon1.fr> Message-ID: <449F9E3F.6000500@gmail.com> Samuel GARCIA wrote: > Sorry if it is not the good place, > > I have a debian testing (etch) , I use the python-matplotlib and > python-ipythonn official debian package. > For numpy and scipy I build them from SVN. Everythings seems to works. I > don't install the old scipy debian package. > > My problem is when I upgrade with apt-get updgrade. I have something > like that : > > Param?trage de debconf (1.5.2) ... > Compiling > /usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py > ... > File > "/usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py", > line 53 > samplespace_index = dict((x, i) for i, x in enumerate(samplespace)) > ^ > SyntaxError: invalid syntax This is Python 2.4 syntax that should not be there, even in examples. We are trying to maintain 2.3 compatibility. Sorry, Ed. :-) > For the moment the solution is to remove scipy and numpy and rebuild > them after. > > 2 questions : > - is there an easier way to upgrade without interaction with scipy ? Don't put non-Debian files in Debian-managed directories like /usr/lib/. Use /usr/local/. You can make distutils install to /usr/local/lib/python/2.3/site-packages/ by adding the following lines to ~/.pydistutils.cfg: [install] prefix=/usr/local I think some recent versions of Debian have this set by default. > - is there an unofficial debian repository for scipy and numpy while > waiting for the official new scipy debian package ? Not that I know of. Andrew Straw has an Ubuntu repository with new packages. It's possible that the source packages might work for Debian with minimal modification. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gnata at obs.univ-lyon1.fr Mon Jun 26 04:44:14 2006 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Mon, 26 Jun 2006 10:44:14 +0200 Subject: [SciPy-user] scipy itpython and debian In-Reply-To: <449F9C34.3010505@gmx.net> References: <449F99C4.8020903@olfac.univ-lyon1.fr> <449F9C34.3010505@gmx.net> Message-ID: <449F9E5E.9050608@obs.univ-lyon1.fr> Hi, It looks like a python transition related bug (python2.3 -> python2.4). The only *real* "solution" is to send a bug to the debian maintainer using debian BTS. Xavier > Samuel GARCIA wrote: > >> Sorry if it is not the good place, >> >> I have a debian testing (etch) , I use the python-matplotlib and >> python-ipythonn official debian package. >> For numpy and scipy I build them from SVN. Everythings seems to works. I >> don't install the old scipy debian package. >> >> My problem is when I upgrade with apt-get updgrade. I have something >> like that : >> >> Param?trage de debconf (1.5.2) ... >> Compiling >> /usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py >> ... >> File >> "/usr/lib/python2.3/site-packages/scipy/maxentropy/examples/conditionalexample2.py", >> line 53 >> samplespace_index = dict((x, i) for i, x in enumerate(samplespace)) >> ^ >> SyntaxError: invalid syntax >> >> >> For the moment the solution is to remove scipy and numpy and rebuild >> them after. 
>> >> 2 questions : >> - is there an easier way to upgrade without interaction with scipy ? >> - is there an unofficial debian repository for scipy and numpy while >> waiting for the official new scipy debian package ? >> >> > > I have exactly the same problem with testing. You don't need to rebuild. > What I do is > > mv /usr/lib/python2.3/site-packages/numpy/ /tmp/numpy > mv /usr/lib/python2.3/site-packages/scipy/ /tmp/scipy > > apt-get [dist-]upgrade > > mv /tmp/numpy /usr/lib/python2.3/site-packages/ > mv /tmp/scipy /usr/lib/python2.3/site-packages/ > > However, if someone knows a *real* solution, that would be nice. > > cheers, > steve > > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From krewinkel at natwiss.uni-luebeck.de Mon Jun 26 06:07:32 2006 From: krewinkel at natwiss.uni-luebeck.de (Albert Krewinkel) Date: Mon, 26 Jun 2006 12:07:32 +0200 Subject: [SciPy-user] dynamic lookup/linking error related to symbol '_sprintf$LDBLStub' Message-ID: <53A8456B-198D-47CE-B35F-B08323AB51BF@natwiss.uni-luebeck.de> Hello, I'm trying to compile scipy on a G4 ibook running on OS 10.4.6. I used the instructions given on the scipy website and build with gcc verion 3.3 and fftw3. While building and installing completes without any errors, I get the following error when I try to import scipy: Python 2.4.1 (#1, Oct 22 2005, 16:20:11) [GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy import linsolve.umfpack -> failed: Failure linking new module: /opt/ local/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Symbol not found: _sprintf$LDBLStub Referenced from: /opt/local/lib/python2.4/site-packages/scipy/ sparse/sparsetools.so Expected in: dynamic lookup The unittests for fft etc. succeed, but fails later with a similar error message (while it tries to import _cephes). I googled on this problem but could not find any appropriate solution. What did I do wrong with my compiling? Thank you in advance Albert From stefan at sun.ac.za Mon Jun 26 06:54:18 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 26 Jun 2006 12:54:18 +0200 Subject: [SciPy-user] scipy.io.mio.savemat error In-Reply-To: References: <449E0C60.2040409@gmail.com> Message-ID: <20060626105418.GA30470@mentat.za.net> On Sun, Jun 25, 2006 at 08:53:00AM -0700, Keith Goodman wrote: > Now I get > > /usr/local/lib/python2.4/site-packages/scipy/io/mio.py in > savemat(filename, dict) > 857 for variable in dict.keys(): > 858 var = dict[variable] > --> 859 if type(var) is not ArrayType: > 860 continue > 861 if var.dtype.char == 'S1': > > NameError: global name 'ArrayType' is not defined Fixed in SVN. Cheers St?fan From srijit at yahoo.com Mon Jun 26 14:23:11 2006 From: srijit at yahoo.com (Srijit Kumar Bhadra) Date: Mon, 26 Jun 2006 19:23:11 +0100 (BST) Subject: [SciPy-user] Modelica and Python Message-ID: <20060626182311.44915.qmail@web60016.mail.yahoo.com> Hello, What are the possible options to use numpy and scipy with Modelica? Best Regards, Srijit __________________________________________________________ Yahoo! India Answers: Share what you know. 
Learn something new http://in.answers.yahoo.com/ From afraser at lanl.gov Mon Jun 26 16:47:39 2006 From: afraser at lanl.gov (afraser) Date: Mon, 26 Jun 2006 14:47:39 -0600 Subject: [SciPy-user] Feature request for sparse matrices Message-ID: <87r71blk44.fsf@hmm.lanl.gov> In my application there is a particular sparse matrix multiply that often causes the sparse matrix package to "Resize" the storage for the result. However, I can make a very good estimate of the size required on the basis of the two input matrices. I have modified my local version of the multiplication method matmat() to accept a key word argument that specifies the size, ie: def matmat(self, other, nnzc=None): . . . if nnzc == None: nnzc = 2*max(ptra[-1], ptrb[-1]) The modification doubles the speed of my code. How should I request/suggest that change for scipy? -- Andy Fraser From nwagner at iam.uni-stuttgart.de Tue Jun 27 04:27:17 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 27 Jun 2006 10:27:17 +0200 Subject: [SciPy-user] Mesh generator Message-ID: <44A0EBE5.20803@iam.uni-stuttgart.de> Hi all, Is there something similar available for scipy ? http://www-math.mit.edu/~persson/mesh/ Nils From cookedm at physics.mcmaster.ca Tue Jun 27 15:06:54 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 27 Jun 2006 15:06:54 -0400 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: <449EDA2A.6030303@gmx.net> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> Message-ID: <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> On Sun, 25 Jun 2006 20:47:06 +0200 Steve Schmerler wrote: > Nils Wagner wrote: > > On Sun, 25 Jun 2006 19:17:16 +0200 > > Steve Schmerler wrote: > >> Hi > >> > >> What are the default values for rtol and atol in > >> scipy.integrate.odeint? > >> > > > > You can find them in scipy/Lib/integrate/ode.py > > > > rtol=1e-6,atol=1e-12 > > > > Sure? What you mention is in site-packages/scipy/integrate/ode.py where > the class ode is defined. But I'm using integrate.odeint (in > site-packages/scipy/integrate/odepack.py) and there is nothing mentioned > about it. odeint is there called like this > > import _odepack > > [...] > > output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, > full_output, rtol, atol, tcrit, h0, hmax,hmin, > ixpr, mxstep, mxhnil, mxordn, mxords) Digging through __odepack.h finds that the defaults are 1.49012e-8. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From doug-scipy at sadahome.ca Tue Jun 27 15:39:33 2006 From: doug-scipy at sadahome.ca (Doug Latornell) Date: Tue, 27 Jun 2006 12:39:33 -0700 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: <449EDA2A.6030303@gmx.net> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> Message-ID: <6279c0a40606271239o5606299euca69fa1ba9c3ded9@mail.gmail.com> Hi Steve; I've been happily using the ode class with the vode integrator for a few months now. I rewrote a model from Octave into Python/NumPy/SciPy. Agreement between the Python/ode/vode code and the Octave one was good. The model is substantially faster in SciPy than it was in Octave, but there are a lot of factors that changed (processor, OS, etc.). 
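For concreteness, the basic ode/vode call pattern under discussion looks roughly like the sketch below. The right-hand side is trivial and the method and tolerance choices are illustrative, not anyone's production settings:

from scipy.integrate import ode

def f(t, y):
    # note the (t, y) argument order -- the opposite of odeint's func(y, t)
    return -y

r = ode(f)
r.set_integrator('vode', method='bdf', rtol=1e-6, atol=1e-12)
r.set_initial_value(1.0, 0.0)
while r.successful() and r.t < 5.0:
    r.integrate(r.t + 0.5)
    print r.t, r.y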
Regards, Doug On 6/25/06, Steve Schmerler wrote: > > Nils Wagner wrote: > > On Sun, 25 Jun 2006 19:17:16 +0200 > > Steve Schmerler wrote: > >> Hi > >> > >> What are the default values for rtol and atol in > >> scipy.integrate.odeint? > >> > > > > You can find them in scipy/Lib/integrate/ode.py > > > > rtol=1e-6,atol=1e-12 > > > > Sure? What you mention is in site-packages/scipy/integrate/ode.py where > the class ode is defined. But I'm using integrate.odeint (in > site-packages/scipy/integrate/odepack.py) and there is nothing mentioned > about it. odeint is there called like this > > import _odepack > > [...] > > output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, > full_output, rtol, atol, tcrit, h0, > hmax,hmin, > ixpr, mxstep, mxhnil, mxordn, mxords) > > > BTW, does anyone have experiences with the ode class (and it's vode (or > cvode?) solver) rather than odeint's lsoda? > > cheers, > steve > > > -- > Random number generation is the art of producing pure gibberish as > quickly as possible. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jun 27 15:50:44 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 14:50:44 -0500 Subject: [SciPy-user] Mesh generator In-Reply-To: <44A0EBE5.20803@iam.uni-stuttgart.de> References: <44A0EBE5.20803@iam.uni-stuttgart.de> Message-ID: <44A18C14.5020605@gmail.com> Nils Wagner wrote: > Hi all, > > Is there something similar available for scipy ? > > http://www-math.mit.edu/~persson/mesh/ I started implementing the Persson's 2D algorithm but never got around to thoroughly testing it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Tue Jun 27 16:02:44 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 27 Jun 2006 22:02:44 +0200 Subject: [SciPy-user] Mesh generator In-Reply-To: <44A18C14.5020605@gmail.com> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com> Message-ID: On Tue, 27 Jun 2006 14:50:44 -0500 Robert Kern wrote: > Nils Wagner wrote: >> Hi all, >> >> Is there something similar available for scipy ? >> >> http://www-math.mit.edu/~persson/mesh/ > > I started implementing the Persson's 2D algorithm but >never got around to > thoroughly testing it. > Robert, Is it ready for the sandbox ? Nils > -- > Robert Kern > > "I have come to believe that the whole world is an >enigma, a harmless enigma > that is made terrible by our own mad attempt to >interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From robert.kern at gmail.com Tue Jun 27 16:17:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 15:17:40 -0500 Subject: [SciPy-user] Mesh generator In-Reply-To: References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com> Message-ID: <44A19264.4030806@gmail.com> Nils Wagner wrote: > On Tue, 27 Jun 2006 14:50:44 -0500 > Robert Kern wrote: >> Nils Wagner wrote: >>> Hi all, >>> >>> Is there something similar available for scipy ? 
>>> >>> http://www-math.mit.edu/~persson/mesh/ >> I started implementing the Persson's 2D algorithm but >> never got around to >> thoroughly testing it. > > Robert, > > Is it ready for the sandbox ? If it were, it would probably be there already. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Tue Jun 27 16:53:49 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 27 Jun 2006 16:53:49 -0400 Subject: [SciPy-user] Mesh generator In-Reply-To: <44A19264.4030806@gmail.com> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com><44A19264.4030806@gmail.com> Message-ID: >>> Nils Wagner wrote: >>>> Is there something similar available for scipy ? >>>> http://www-math.mit.edu/~persson/mesh/ >> On Tue, 27 Jun 2006 14:50:44 -0500 Robert Kern >>> I started implementing the Persson's 2D algorithm but >>> never got around to thoroughly testing it. > Nils Wagner wrote: >> Is it ready for the sandbox ? On Tue, 27 Jun 2006, Robert Kern apparently wrote: > If it were, it would probably be there already. So ... it would be great if Nils could try to push it over the hump and into the sandbox, if (as I think you offered) Nils can play with the code for awhile. Cheers, Alan Isaac From robert.kern at gmail.com Tue Jun 27 17:49:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 16:49:45 -0500 Subject: [SciPy-user] Mesh generator In-Reply-To: References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com><44A19264.4030806@gmail.com> Message-ID: <44A1A7F9.6000509@gmail.com> Alan G Isaac wrote: > So ... > it would be great if Nils could try to push it over the hump > and into the sandbox, if (as I think you offered) Nils can > play with the code for awhile. I did not so offer. It will be released when I feel comfortable releasing it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From elcorto at gmx.net Tue Jun 27 17:56:43 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 27 Jun 2006 23:56:43 +0200 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> Message-ID: <44A1A99B.4060002@gmx.net> David M. Cooke wrote: >>>> What are the default values for rtol and atol in >>>> scipy.integrate.odeint? >>>> > > Digging through __odepack.h finds that the defaults are 1.49012e-8. > Thanks. Maybe doc string should mention this. Not that I would care to much, but this seems a rather random value at the first glance. Is there a special reason for this val? cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From kwgoodman at gmail.com Tue Jun 27 18:08:54 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 15:08:54 -0700 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: <44A1A99B.4060002@gmx.net> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> <44A1A99B.4060002@gmx.net> Message-ID: On 6/27/06, Steve Schmerler wrote: > David M. 
Cooke wrote: > > >>>> What are the default values for rtol and atol in > >>>> scipy.integrate.odeint? > >>>> > > > > > Digging through __odepack.h finds that the defaults are 1.49012e-8. > > > > Thanks. Maybe doc string should mention this. > > Not that I would care to much, but this seems a rather random value at > the first glance. Is there a special reason for this val? It looks like sqrt(eps) where eps is the machine precision. BTW, can scipy return eps? From oliphant.travis at ieee.org Tue Jun 27 18:10:44 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 16:10:44 -0600 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: <44A1A99B.4060002@gmx.net> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> <44A1A99B.4060002@gmx.net> Message-ID: <44A1ACE4.3030108@ieee.org> Steve Schmerler wrote: > David M. Cooke wrote: > > >>>>> What are the default values for rtol and atol in >>>>> scipy.integrate.odeint? >>>>> >>>>> > > >> Digging through __odepack.h finds that the defaults are 1.49012e-8. >> >> > > Thanks. Maybe doc string should mention this. > > Not that I would care to much, but this seems a rather random value at > the first glance. Is there a special reason for this val? > > It's the sqrt(epsilon) where epsilon is the level of precision on C double for IEEE double-precision standard. -Travis From elcorto at gmx.net Tue Jun 27 18:12:57 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 28 Jun 2006 00:12:57 +0200 Subject: [SciPy-user] using (c)vode [was: odeint rtol and atol default values)] In-Reply-To: <6279c0a40606271239o5606299euca69fa1ba9c3ded9@mail.gmail.com> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <6279c0a40606271239o5606299euca69fa1ba9c3ded9@mail.gmail.com> Message-ID: <44A1AD69.8040303@gmx.net> Doug Latornell wrote: > Hi Steve; > > I've been happily using the ode class with the vode integrator for a few > months now. I rewrote a model from Octave into Python/NumPy/SciPy. > Agreement between the Python/ode/vode code and the Octave one was good. > The model is substantially faster in SciPy than it was in Octave, but > there are a lot of factors that changed (processor, OS, etc.). Did you have a special reason for chosing vode/ode over lsoda/odeint (speed, accuracy, ...)? If I'm right, Octave uses lsode (http://www.gnu.org/software/octave/doc/interpreter/Ordinary-Differential-Equations.html#Ordinary-Differential-Equations) cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From kwgoodman at gmail.com Tue Jun 27 18:14:57 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 27 Jun 2006 15:14:57 -0700 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> <44A1A99B.4060002@gmx.net> Message-ID: On 6/27/06, Keith Goodman wrote: > On 6/27/06, Steve Schmerler wrote: > > David M. Cooke wrote: > > > > >>>> What are the default values for rtol and atol in > > >>>> scipy.integrate.odeint? > > >>>> > > > > > > > > Digging through __odepack.h finds that the defaults are 1.49012e-8. > > > > > > > Thanks. Maybe doc string should mention this. > > > > Not that I would care to much, but this seems a rather random value at > > the first glance. Is there a special reason for this val? > > It looks like sqrt(eps) where eps is the machine precision. 
> > BTW, can scipy return eps? P.S. http://www.library.cornell.edu/nr/bookcpdf/c5-7.pdf From oliphant.travis at ieee.org Tue Jun 27 18:14:39 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 27 Jun 2006 16:14:39 -0600 Subject: [SciPy-user] odeint rtol and atol default values In-Reply-To: References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <20060627150654.06bc2cc6@arbutus.physics.mcmaster.ca> <44A1A99B.4060002@gmx.net> Message-ID: <44A1ADCF.1040208@ieee.org> Keith Goodman wrote: > On 6/27/06, Steve Schmerler wrote: > >> David M. Cooke wrote: >> >> >>>>>> What are the default values for rtol and atol in >>>>>> scipy.integrate.odeint? >>>>>> >>>>>> >>> Digging through __odepack.h finds that the defaults are 1.49012e-8. >>> >>> >> Thanks. Maybe doc string should mention this. >> >> Not that I would care to much, but this seems a rather random value at >> the first glance. Is there a special reason for this val? >> > > It looks like sqrt(eps) where eps is the machine precision. > > BTW, can scipy return eps? > > >>> dir(numpy.finfo(float)) ['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__', '__weakref__', '_finfo_cache', '_init', '_str_eps', '_str_epsneg', '_str_max', '_str_resolution', '_str_tiny', 'dtype', 'eps', 'epsneg', 'iexp', 'machar', 'machep', 'max', 'maxexp', 'min', 'minexp', 'negep', 'nexp', 'nmant', 'precision', 'resolution', 'tiny'] Thus: numpy.finfo(float).eps There is also scipy.misc.limits which contains constants as names: dir(scipy.misc.limits) ['__all__', '__builtins__', '__doc__', '__file__', '__name__', 'double_epsilon', 'double_max', 'double_min', 'double_precision', 'double_resolution', 'double_tiny', 'finfo', 'float_', 'float_epsilon', 'float_max', 'float_min', 'float_precision', 'float_resolution', 'float_tiny', 'single', 'single_epsilon', 'single_max', 'single_min', 'single_precision', 'single_resolution', 'single_tiny'] -Travis From aisaac at american.edu Tue Jun 27 18:34:11 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 27 Jun 2006 18:34:11 -0400 Subject: [SciPy-user] Mesh generator In-Reply-To: <44A1A7F9.6000509@gmail.com> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com><44A19264.4030806@gmail.com> <44A1A7F9.6000509@gmail.com> Message-ID: On Tue, 27 Jun 2006, Robert Kern apparently wrote: > I did not so offer. My apologies for misconstruing your intent in announcing the existence of the code. Alan Isaac From doug-scipy at sadahome.ca Tue Jun 27 18:47:16 2006 From: doug-scipy at sadahome.ca (Doug Latornell) Date: Tue, 27 Jun 2006 15:47:16 -0700 Subject: [SciPy-user] using (c)vode [was: odeint rtol and atol default values)] In-Reply-To: <44A1AD69.8040303@gmx.net> References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net> <6279c0a40606271239o5606299euca69fa1ba9c3ded9@mail.gmail.com> <44A1AD69.8040303@gmx.net> Message-ID: <6279c0a40606271547n64a64293p13a42807692eefcb@mail.gmail.com> I believe you're correct re: Octave and lsode. I tracked through documentation to the sources pages for vode, and the other algorithms, read them and decided that vode was well suited to my problem. Can't recall the exact issues that convinced me now, though I think its ability to adapt to stiff problems was one point. Accuracy was equivalent to Octave/lsode, but I don't have a closed form case to compare to. 
I'm modelling production process data with *lots* of sources of deviation. Both integrators give me acceptable predictions (for my purposes). Doug On 6/27/06, Steve Schmerler wrote: > > Doug Latornell wrote: > > Hi Steve; > > > > I've been happily using the ode class with the vode integrator for a few > > months now. I rewrote a model from Octave into Python/NumPy/SciPy. > > Agreement between the Python/ode/vode code and the Octave one was good. > > The model is substantially faster in SciPy than it was in Octave, but > > there are a lot of factors that changed (processor, OS, etc.). > > Did you have a special reason for chosing vode/ode over lsoda/odeint > (speed, accuracy, ...)? > > If I'm right, Octave uses lsode > ( > http://www.gnu.org/software/octave/doc/interpreter/Ordinary-Differential-Equations.html#Ordinary-Differential-Equations > ) > > cheers, > steve > > -- > Random number generation is the art of producing pure gibberish as > quickly as possible. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.j.ross at gmail.com Tue Jun 27 20:47:49 2006 From: alex.j.ross at gmail.com (Alexander Ross) Date: Tue, 27 Jun 2006 17:47:49 -0700 Subject: [SciPy-user] arrays with units Message-ID: <62A3F7EC-90A2-4717-8E23-A9E7138B014C@gmail.com> I'm in need of a simple way to attach units of measurement to a numpy array. I've read some of the previous discussion on the topic, and found the constants module tucked away in the sandbox. While the constants package does a good job at providing access to the various physical constants it doesn't satisfy my needs. The package Unum would require me to use object arrays, and I don't want to do that. Ideally, I'd like an extended ndarray adding a new attribute `units'. The `units' attribute would be updated by the appropriate arithmetic operations, and I guess exceptions would be raised for operations which don't make sense. Is anything like this in the works? Should I strike out on my own? Alex Ross Student Hire NOAA's National Weather Service Fairbanks, Alaska From robert.kern at gmail.com Tue Jun 27 21:05:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Jun 2006 20:05:55 -0500 Subject: [SciPy-user] arrays with units In-Reply-To: <62A3F7EC-90A2-4717-8E23-A9E7138B014C@gmail.com> References: <62A3F7EC-90A2-4717-8E23-A9E7138B014C@gmail.com> Message-ID: <44A1D5F3.3080706@gmail.com> Alexander Ross wrote: > I'm in need of a simple way to attach units of measurement to a numpy > array. I've read some of the previous discussion on the topic, and > found the constants module tucked away in the sandbox. While the > constants package does a good job at providing access to the various > physical constants it doesn't satisfy my needs. Which are? > The package Unum > would require me to use object arrays, and I don't want to do that. I don't think so. http://home.tiscali.be/be052320/Unum_tutorial.html#_Toc68111433 > Ideally, I'd like an extended ndarray adding a new attribute > `units'. The `units' attribute would be updated by the appropriate > arithmetic operations, and I guess exceptions would be raised for > operations which don't make sense. > > Is anything like this in the works? Should I strike out on my own? Not particularly. 
Most unit packages work well with arrays; instead of inheriting from ndarray, they simply hold an attribute that is assumed to be a number-like object (like an array). Some may still need to be ported from Numeric to numpy: Konrad Hinsen's Scientific.Physics.PhysicalQuantities Michael Aivazis's pyre.units Enthought's enthought.units (mostly derived from Michael's work) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmarais at sun.ac.za Wed Jun 28 10:09:12 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Wed, 28 Jun 2006 16:09:12 +0200 Subject: [SciPy-user] numpy.testing and other test runners Message-ID: Hi I've been playing around with using other test runners together with my numpy.testing based tests. I've tried nosetest and TestGears so far. Has anyone else tried any others, and what have your experiences been? Nosetest is kinda nice since it allows you to run pdb on tests that error out, and allows you to specify tests by name from the commandline. I've been using nosetest on another project that uses only normal unittest derrived tests for a while. On NumpyTestCase derived tests it would not run at first, but I got it working by hacking a little on numpy/testing/numpytest.py: --- numpy/testing/numpytest.py.orig 2006-06-16 22:40:11.000000000 +0200 +++ numpy/testing/numpytest.py 2006-06-28 14:18:45.000000000 +0200 @@ -139,7 +139,7 @@ result.stream = _dummy_stream(save_stream) unittest.TestCase.__call__(self, result) if nof_errors != len(result.errors): - test, errstr = result.errors[-1] + test, errstr = result.errors[-1][0:2] if isinstance(errstr, tuple): errstr = str(errstr[0]) elif isinstance(errstr, str): I don't know the unittest API well enough to know if this is a bug in numpytest or in nose, but it seems to work OK. TestGears looks interesting since it can generate a unittest compatible test-suite using its own collector. Ideally you could then feed this testsuite to a GUI runner or something similar. I've only tried it quickly, but couldn't get it to work. Cheers Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From cookedm at physics.mcmaster.ca Wed Jun 28 14:36:53 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 14:36:53 -0400 Subject: [SciPy-user] numpy.testing and other test runners In-Reply-To: References: Message-ID: <20060628143653.161db54f@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 16:09:12 +0200 Neilen Marais wrote: > Hi > > I've been playing around with using other test runners together with my > numpy.testing based tests. I've tried nosetest and TestGears so far. Has > anyone else tried any others, and what have your experiences been? > > Nosetest is kinda nice since it allows you to run pdb on tests that error > out, and allows you to specify tests by name from the commandline. I've > been using nosetest on another project that uses only normal unittest > derrived tests for a while. 
From cookedm at physics.mcmaster.ca Wed Jun 28 14:36:53 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 28 Jun 2006 14:36:53 -0400 Subject: [SciPy-user] numpy.testing and other test runners In-Reply-To: References: Message-ID: <20060628143653.161db54f@arbutus.physics.mcmaster.ca> On Wed, 28 Jun 2006 16:09:12 +0200 Neilen Marais wrote: > Hi > > I've been playing around with using other test runners together with my > numpy.testing based tests. I've tried nosetest and TestGears so far. Has > anyone else tried any others, and what have your experiences been? > > Nosetest is kinda nice since it allows you to run pdb on tests that error > out, and allows you to specify tests by name from the commandline. I've > been using nosetest on another project that uses only normal unittest > derrived tests for a while. On NumpyTestCase derived tests it would not run > at first, but I got it working by hacking a little on > numpy/testing/numpytest.py: > > --- numpy/testing/numpytest.py.orig 2006-06-16 22:40:11.000000000 +0200 > +++ numpy/testing/numpytest.py 2006-06-28 14:18:45.000000000 +0200 > @@ -139,7 +139,7 @@ > result.stream = _dummy_stream(save_stream) > unittest.TestCase.__call__(self, result) > if nof_errors != len(result.errors): > - test, errstr = result.errors[-1] > + test, errstr = result.errors[-1][0:2] > if isinstance(errstr, tuple): > errstr = str(errstr[0]) > elif isinstance(errstr, str): > > I don't know the unittest API well enough to know if this is a bug in > numpytest or in nose, but it seems to work OK. I applied the patch as it won't impact numpy (it just loosens the requirement that len(result.errors) == 2 to >= 2). -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca
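For readers trying this at home, a minimal numpy.testing-style module that the stock runner -- and, with the patch above, nose -- should both collect might look like the sketch below. The module contents are invented for illustration; check_* was the numpy.testing naming convention for test methods of the day:

from numpy import array
from numpy.testing import NumpyTest, NumpyTestCase, assert_array_equal

class test_basic(NumpyTestCase):
    def check_addition(self):
        # a trivial check; a real suite would exercise the package under test
        a = array([1, 2, 3])
        assert_array_equal(a + a, array([2, 4, 6]))

if __name__ == '__main__':
    NumpyTest().run()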
From bhendrix at enthought.com Wed Jun 28 15:53:29 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 28 Jun 2006 14:53:29 -0500 Subject: [SciPy-user] numpy.testing and other test runners In-Reply-To: <20060628143653.161db54f@arbutus.physics.mcmaster.ca> References: <20060628143653.161db54f@arbutus.physics.mcmaster.ca> Message-ID: <44A2DE39.5080800@enthought.com> We use a slightly customized version of Testoob for testing enthought packages. I've run it on numpy in the past, but have temporarily disabled continuous building and testing of numpy. Bryce David M. Cooke wrote: > On Wed, 28 Jun 2006 16:09:12 +0200 > Neilen Marais wrote: > > >> Hi >> >> I've been playing around with using other test runners together with my >> numpy.testing based tests. I've tried nosetest and TestGears so far. Has >> anyone else tried any others, and what have your experiences been? >> >> Nosetest is kinda nice since it allows you to run pdb on tests that error >> out, and allows you to specify tests by name from the commandline. I've >> been using nosetest on another project that uses only normal unittest >> derrived tests for a while. On NumpyTestCase derived tests it would not run >> at first, but I got it working by hacking a little on >> numpy/testing/numpytest.py: >> >> --- numpy/testing/numpytest.py.orig 2006-06-16 22:40:11.000000000 +0200 >> +++ numpy/testing/numpytest.py 2006-06-28 14:18:45.000000000 +0200 >> @@ -139,7 +139,7 @@ >> result.stream = _dummy_stream(save_stream) >> unittest.TestCase.__call__(self, result) >> if nof_errors != len(result.errors): >> - test, errstr = result.errors[-1] >> + test, errstr = result.errors[-1][0:2] >> if isinstance(errstr, tuple): >> errstr = str(errstr[0]) >> elif isinstance(errstr, str): >> >> I don't know the unittest API well enough to know if this is a bug in >> numpytest or in nose, but it seems to work OK. >> > > I applied the patch as it won't impact numpy (it just loosens the requirement > that len(result.errors) == 2 to >= 2). > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmaloney at kitp.ucsb.edu Wed Jun 28 18:49:09 2006 From: cmaloney at kitp.ucsb.edu (Craig Maloney) Date: Wed, 28 Jun 2006 15:49:09 -0700 Subject: [SciPy-user] numpy install fails Message-ID: <900EDCF5-AB2A-4ED3-9A67-51985C04595E@kitp.ucsb.edu> Hi all. Have a numpy installation problem. Wondering if anyone has seen something similar. I'm installing with "python setup.py install --prefix=$HOME/usr/local > setup.out" There are some messages during the install about not finding Atlas, fine... no Atlas. I understand it has its own (potentially slow) built-in BLAS/LAPACK. But when I try to import numpy, I get: ------------------------------------------------- >>> import numpy import linalg -> failed: /dat/o1/cmaloney/usr/local/lib/python2.3/ site-packages/numpy/linalg/lapack_lite.so: undefined symbol: __mth_i_dsqrt ---------------------------------------------- If I do an ldd on lapack_lite.so, it looks like all dependencies are met. Anyone know where __mth_i_dsqrt is located, and why the build didn't complain about the unresolved symbol? Thanks, Craig PS This is on a slackware box (on which I'm not admin). Same install procedure works fine on my mac with fink:python-2.3. From oliphant at ee.byu.edu Wed Jun 28 21:26:47 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 28 Jun 2006 19:26:47 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? Message-ID: <44A32C57.1080901@ee.byu.edu> Hello. I've been playing around with weave and NumPy. I have not been able to get weave.blitz to speed up any expression. Has anybody had success at doing this with new NumPy? Did something change to make blitz run more slowly? Try this test: a = ones((512,512), float64) b = ones((512,512), float64) expr = "a[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \ "+ b[1:-1,2:] + b[1:-1,:-2]) / 5." weave.blitz(expr) start = time.time(); exec(expr); stop = time.time(); print stop-start, "secs." start = time.time(); weave.blitz(expr); stop = time.time(); print stop-start, "secs." On the two systems I've tried the first is about 50% faster than the second. From fperez.net at gmail.com Thu Jun 29 01:09:48 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 28 Jun 2006 23:09:48 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: <44A32C57.1080901@ee.byu.edu> References: <44A32C57.1080901@ee.byu.edu> Message-ID: On 6/28/06, Travis Oliphant wrote: > > Hello. > > I've been playing around with weave and NumPy. I have not been able to > get weave.blitz to speed up any expression. Has anybody had success at > doing this with new NumPy? Try the attached script, which JDH wrote for one of our workshops last year, and which I cleaned up for current numpy/weave for the April one. Run it /twice/, since the first time the compilation overhead will swamp the weave.blitz results. I've attached the result on my machine for the second run, which does show an improvement (and a similar one to what I recall from last summer, when John and I ran this using Numeric/Old Scipy). Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: weave_blitz.py Type: text/x-python Size: 1672 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: weave_blitz_comp_sm.png Type: image/png Size: 51290 bytes Desc: not available URL: From oliphant.travis at ieee.org Thu Jun 29 01:46:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 28 Jun 2006 23:46:28 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: References: <44A32C57.1080901@ee.byu.edu> Message-ID: <44A36934.80300@ieee.org> Fernando Perez wrote: > On 6/28/06, Travis Oliphant wrote: >> >> Hello. >> >> I've been playing around with weave and NumPy. I have not been able to >> get weave.blitz to speed up any expression. Has anybody had success at >> doing this with new NumPy. > > Try the attached script, which JDH wrote for one of our workshops last > year, and which I cleaned up for current numpy/weave for the April > one. Run it /twice/, since the first time the compilation overhead > will swamp the weave.blitz results. I've attached the result on my > machine for the second run, which do show an improvement (and a > similar one to what I recall from last summer, when John and I ran > this using Numeric/Old Scipy). Thanks Fernando. This is what I'm getting when running this script. I don't understand what is going on? The weave.blitz version is not helping at all. -Travis -------------- next part -------------- A non-text attachment was scrubbed... Name: weave_numpy.png Type: image/png Size: 18246 bytes Desc: not available URL: From oliphant.travis at ieee.org Thu Jun 29 01:49:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 28 Jun 2006 23:49:53 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: References: <44A32C57.1080901@ee.byu.edu> Message-ID: <44A36A01.5040500@ieee.org> Fernando Perez wrote: > On 6/28/06, Travis Oliphant wrote: >> >> Hello. >> >> I've been playing around with weave and NumPy. I have not been able to >> get weave.blitz to speed up any expression. Has anybody had success at >> doing this with new NumPy. > > Try the attached script, which JDH wrote for one of our workshops last > year, and which I cleaned up for current numpy/weave for the April > one. Run it /twice/, since the first time the compilation overhead > will swamp the weave.blitz results. I've attached the result on my > machine for the second run, which do show an improvement (and a > similar one to what I recall from last summer, when John and I ran > this using Numeric/Old Scipy). My results seem to indicate a problem either with my installation or with current weave.blitz. Could somebody else running NumPy and SciPy with weave also run the script. Fernando, could you send the source code of the function that was created and compiled by weave. Thanks, -Travis From oliphant.travis at ieee.org Thu Jun 29 01:55:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 28 Jun 2006 23:55:42 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: <44A36934.80300@ieee.org> References: <44A32C57.1080901@ee.byu.edu> <44A36934.80300@ieee.org> Message-ID: <44A36B5E.20604@ieee.org> Travis Oliphant wrote: > Fernando Perez wrote: >> On 6/28/06, Travis Oliphant wrote: >>> >>> Hello. >>> >>> I've been playing around with weave and NumPy. I have not been able to >>> get weave.blitz to speed up any expression. Has anybody had success at >>> doing this with new NumPy. >> >> Try the attached script, which JDH wrote for one of our workshops last >> year, and which I cleaned up for current numpy/weave for the April >> one. 
Run it /twice/, since the first time the compilation overhead >> will swamp the weave.blitz results. I've attached the result on my >> machine for the second run, which do show an improvement (and a >> similar one to what I recall from last summer, when John and I ran >> this using Numeric/Old Scipy). > Thanks Fernando. > > This is what I'm getting when running this script. I don't > understand what is going on? The weave.blitz version is not helping > at all. There are two runs shown (the first run where compilation occurred is not shown). So, these results are not due to compilation time... I'm use g++ 3.4.1 to compile the .cpp code. -Travis From fperez.net at gmail.com Thu Jun 29 02:20:59 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 29 Jun 2006 00:20:59 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: <44A36A01.5040500@ieee.org> References: <44A32C57.1080901@ee.byu.edu> <44A36A01.5040500@ieee.org> Message-ID: On 6/28/06, Travis Oliphant wrote: > Fernando, could you send the source code of the function that was > created and compiled by weave. Here it goes, Travis. I'm using ubuntu dapper, which comes with python 2.4.3 and gcc 4.0.3. I'm testing a bit more... f -------------- next part -------------- A non-text attachment was scrubbed... Name: sc_31de76ffcfd09d0d51a74919757ea1da0.cpp Type: text/x-c++src Size: 23938 bytes Desc: not available URL: From arnd.baecker at web.de Thu Jun 29 02:29:07 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 29 Jun 2006 08:29:07 +0200 (CEST) Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: <44A36A01.5040500@ieee.org> References: <44A32C57.1080901@ee.byu.edu> <44A36A01.5040500@ieee.org> Message-ID: On Wed, 28 Jun 2006, Travis Oliphant wrote: [...] > My results seem to indicate a problem either with my installation or > with current weave.blitz. Could somebody else running NumPy and SciPy > with weave also run the script. Results for the first and second run on the 64 Bit Opteron are attached. Look fine to me. Best, Arnd P.S.: numpy.__version__ '0.9.9.2696' scipy.__version__ '0.5.0.2012' -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_vs_weave_64bit_run1.png Type: image/png Size: 33854 bytes Desc: URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy_vs_weave_64bit_run2.png Type: image/png Size: 35551 bytes Desc: URL: From fperez.net at gmail.com Thu Jun 29 02:34:46 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 29 Jun 2006 00:34:46 -0600 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: <44A36934.80300@ieee.org> References: <44A32C57.1080901@ee.byu.edu> <44A36934.80300@ieee.org> Message-ID: On 6/28/06, Travis Oliphant wrote: > This is what I'm getting when running this script. I don't understand > what is going on? The weave.blitz version is not helping at all. I'm not really sure what's going on here either. I've cleaned up the timing script to be a lot more careful, use minimum timing, reduced overheads, etc. I'm attaching that version. And I'm also attaching timings for arrays of (linear) size 300, 500 and 1000. It looks like cache effects throw the numbers around quite a bit (as usual), but what I do consistently see is blitz outperform numpy. Blitz is heavily dependent on templates expanding properly to avoid temporaries (that's its "big trick"); is there any chance the compiler is messing up here? 
I'm not sufficiently into C++ arcana to really know if this can even happen... Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: weave_blitz.py Type: text/x-python Size: 2210 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy-blitz_300.png Type: image/png Size: 11747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy-blitz_500.png Type: image/png Size: 12350 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy-blitz_1000.png Type: image/png Size: 11947 bytes Desc: not available URL: From eric at enthought.com Thu Jun 29 03:13:46 2006 From: eric at enthought.com (eric jones) Date: Thu, 29 Jun 2006 02:13:46 -0500 Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up? In-Reply-To: References: <44A32C57.1080901@ee.byu.edu> <44A36934.80300@ieee.org> Message-ID: <44A37DAA.8090600@enthought.com> The attached image shows what I get with Python 2.3.5, Enthought Edition (0.9.7) on windows XP laptop. Weave.blitz appears to behave fine showing a consistent and significant benefit. It doesn't appear to affect this problem because the arrays are so large, but blitz does some checking to make sure the array sizes are all compatible before running calculations. This can be significant in some cases. You can try the following and see if it helps (but I doubt that is the issue). blitz(expr,check_size=0) eric -------------- next part -------------- A non-text attachment was scrubbed... Name: weave_test.png Type: image/png Size: 35153 bytes Desc: not available URL: From gnata at obs.univ-lyon1.fr Thu Jun 29 04:36:09 2006 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Thu, 29 Jun 2006 10:36:09 +0200 Subject: [SciPy-user] from scipy import * error. (outerproduct missing) Message-ID: <44A390F9.6050301@obs.univ-lyon1.fr> Hi, I have compiled the lastest svn of numpy and scipy. Using ipython, from numpy import * works well but from scipy import * fails : --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/gnata/ /usr/lib/python2.4/site-packages/scipy/linalg/__init__.py 6 from linalg_version import linalg_version as __version__ 7 ----> 8 from basic import * 9 from decomp import * 10 from matfuncs import * /usr/lib/python2.4/site-packages/scipy/linalg/basic.py 20 conjugate,ravel,r_,mgrid,take,ones,dot,transpose,sqrt,add,real 21 import numpy ---> 22 from numpy import asarray_chkfinite, outerproduct, concatenate, reshape, single 23 from numpy import matrix as Matrix 24 import calc_lwork ImportError: cannot import name outerproduct I dont know how old is this bug because I hardly ever use import * Xavier. -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From oliphant.travis at ieee.org Thu Jun 29 05:14:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 29 Jun 2006 03:14:52 -0600 Subject: [SciPy-user] from scipy import * error. 
From oliphant.travis at ieee.org Thu Jun 29 05:14:52 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 29 Jun 2006 03:14:52 -0600
Subject: [SciPy-user] from scipy import * error. (outerproduct missing)
In-Reply-To: <44A390F9.6050301@obs.univ-lyon1.fr>
References: <44A390F9.6050301@obs.univ-lyon1.fr>
Message-ID: <44A39A0C.4060205@ieee.org>

Xavier Gnata wrote:
> Hi,
>
> I have compiled the latest SVN of numpy and scipy.
> Using ipython, from numpy import * works well but
> from scipy import * fails:
>

Thanks for reporting. You must not have the latest scipy SVN (or must
not have installed it) because outerproduct is not present in scipy SVN.

Best,

-Travis

From oliphant.travis at ieee.org Thu Jun 29 05:22:33 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 29 Jun 2006 03:22:33 -0600
Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up?
In-Reply-To:
References: <44A32C57.1080901@ee.byu.edu> <44A36A01.5040500@ieee.org>
Message-ID: <44A39BD9.7080001@ieee.org>

Fernando Perez wrote:
> On 6/28/06, Travis Oliphant wrote:
>
>> Fernando, could you send the source code of the function that was
>> created and compiled by weave.
>
> Here it goes, Travis. I'm using ubuntu dapper, which comes with
> python 2.4.3 and gcc 4.0.3.
>
> I'm testing a bit more...
>

Problem discovered. I have a "debug" build of Python (i.e.
--enable-debug when compiling Python). The debug build of Python
executes code more slowly but keeps track of reference counts.
Apparently, this is just enough to diminish any gains by weave.blitz.
It also probably compiles with fewer optimizations set.

Using a non-debug build of Python, I see the blitz improvement.

Sorry to trouble everyone...

-Travis

From oliphant.travis at ieee.org Thu Jun 29 06:13:43 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Thu, 29 Jun 2006 04:13:43 -0600
Subject: [SciPy-user] New feature added to weave's standard array type converters
Message-ID: <44A3A7D7.9090407@ieee.org>

In all my playing with weave, I added a new feature to the standard
array type converters. If you have a NumPy array bound to the name 'a'
in Python that is passed into weave with the standard type converter,
then the macros A1(i), A2(i,j), A3(i,j,k) and A4(i,j,k,l) will be
defined for you to use to de-reference the elements of the array as the
appropriate data-type. All of the macros are defined --- use the one
that actually applies to your array.

This feature lets you write integer-index code in a readable fashion
that will work for arbitrarily strided arrays.

The macros are named NAME1, NAME2, NAME3, and NAME4, where NAME is the
Python name converted to all capital letters.

-Travis
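To make the naming rule concrete, here is a hypothetical use of the new
macros through weave.inline. This example is not from Travis's change;
it assumes the standard converters also define the usual Na shape array
for a variable named 'a', as the weave documentation describes:

    import numpy as np
    from scipy import weave

    a = np.arange(12.0).reshape(3, 4)
    code = """
    // 'a' is 2-d, so A2(i, j) dereferences it; Na holds its shape.
    double total = 0.0;
    for (int i = 0; i < Na[0]; ++i)
        for (int j = 0; j < Na[1]; ++j)
            total += A2(i, j);
    return_val = total;
    """
    print weave.inline(code, ['a'])   # should print 66.0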
From akinoame1 at gmail.com Thu Jun 29 06:26:48 2006
From: akinoame1 at gmail.com (Denis Simakov)
Date: Thu, 29 Jun 2006 13:26:48 +0300
Subject: [SciPy-user] Anybody used weave.blitz to speed NumPy up?
In-Reply-To:
References: <44A32C57.1080901@ee.byu.edu> <44A36934.80300@ieee.org>
Message-ID: <73eb51090606290326j23c954b4rd0703661da0abd3@mail.gmail.com>

blitz works fine (consistently faster than numpy) with Fernando's last
script for the following configuration:

  laptop with Ubuntu Dapper
  gcc 4.0.3
  python 2.4.3
  numpy 0.9.9.2631
  scipy 0.5.0.1980

Denis

From doug-scipy at sadahome.ca Thu Jun 29 18:30:51 2006
From: doug-scipy at sadahome.ca (Doug Latornell)
Date: Thu, 29 Jun 2006 15:30:51 -0700
Subject: [SciPy-user] using (c)vode [was: odeint rtol and atol default values)]
In-Reply-To: <6279c0a40606271547n64a64293p13a42807692eefcb@mail.gmail.com>
References: <449EC51C.5050700@gmx.net> <449EDA2A.6030303@gmx.net>
	<6279c0a40606271239o5606299euca69fa1ba9c3ded9@mail.gmail.com>
	<44A1AD69.8040303@gmx.net>
	<6279c0a40606271547n64a64293p13a42807692eefcb@mail.gmail.com>
Message-ID: <6279c0a40606291530p72eebe35g3ac9e808eaa21db0@mail.gmail.com>

It was the abstract of this paper
http://www.llnl.gov/CASC/nsde/pubs/207532.pdf that convinced me to give
the vode algorithm a try.

Doug

On 6/27/06, Doug Latornell wrote:
>
> I believe you're correct re: Octave and lsode.
>
> I tracked through the documentation to the source pages for vode and the
> other algorithms, read them, and decided that vode was well suited to my
> problem. Can't recall the exact issues that convinced me now, though I
> think its ability to adapt to stiff problems was one point.
>
> Accuracy was equivalent to Octave/lsode, but I don't have a closed-form
> case to compare to. I'm modelling production process data with *lots* of
> sources of deviation. Both integrators give me acceptable predictions (for
> my purposes).
>
> Doug
>
> On 6/27/06, Steve Schmerler wrote:
> >
> > Doug Latornell wrote:
> > > Hi Steve;
> > >
> > > I've been happily using the ode class with the vode integrator for a few
> > > months now. I rewrote a model from Octave into Python/NumPy/SciPy.
> > > Agreement between the Python/ode/vode code and the Octave one was good.
> > > The model is substantially faster in SciPy than it was in Octave, but
> > > there are a lot of factors that changed (processor, OS, etc.).
> >
> > Did you have a special reason for choosing vode/ode over lsoda/odeint
> > (speed, accuracy, ...)?
> >
> > If I'm right, Octave uses lsode
> > (http://www.gnu.org/software/octave/doc/interpreter/Ordinary-Differential-Equations.html#Ordinary-Differential-Equations)
> >
> > cheers,
> > steve
> >
> > --
> > Random number generation is the art of producing pure gibberish as
> > quickly as possible.
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.net
> > http://www.scipy.net/mailman/listinfo/scipy-user
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
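For anyone wanting to try the same route, a minimal sketch of driving
the vode integrator through scipy.integrate.ode (the ODE here is a
made-up stiff example, not Doug's production model):

    import numpy as np
    from scipy.integrate import ode

    def rhs(t, y):
        # stiff relaxation toward cos(t), purely illustrative
        return -1000.0 * (y - np.cos(t))

    r = ode(rhs).set_integrator('vode', method='bdf',
                                rtol=1e-6, atol=1e-9)
    r.set_initial_value([0.0], t=0.0)
    while r.successful() and r.t < 1.0:
        r.integrate(r.t + 0.1)
        print r.t, r.y

method='bdf' selects the backward-differentiation formulas, which is the
vode feature that matters for stiff problems like the ones mentioned
above.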
From nwagner at iam.uni-stuttgart.de Fri Jun 30 05:05:37 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 30 Jun 2006 11:05:37 +0200
Subject: [SciPy-user] Triangulation of L-shaped domains
Message-ID: <44A4E961.8000208@iam.uni-stuttgart.de>

Hi all,

I have installed delaunay from the sandbox. How can I triangulate
L-shaped domains?

My first try is somewhat unsatisfactory (delaunay.png).
How can I remove the unwanted triangles down to the right?

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: delaunay.png
Type: image/png
Size: 66661 bytes
Desc: not available
URL:

From t.zito at biologie.hu-berlin.de Fri Jun 30 08:51:38 2006
From: t.zito at biologie.hu-berlin.de (Tiziano Zito)
Date: Fri, 30 Jun 2006 14:51:38 +0200
Subject: [SciPy-user] MDP-2.0 released
Message-ID: <20060630125138.GA16597@itb.biologie.hu-berlin.de>

MDP version 2.0 has been released!

What is it?
-----------
Modular toolkit for Data Processing (MDP) is a data processing framework
written in Python.

From the user's perspective, MDP consists of a collection of trainable
supervised and unsupervised algorithms that can be combined into data
processing flows. The base of readily available algorithms includes
Principal Component Analysis, two flavors of Independent Component
Analysis, Slow Feature Analysis, Gaussian Classifiers, Growing Neural
Gas, Fisher Discriminant Analysis, and Factor Analysis.

From the developer's perspective, MDP is a framework to make the
implementation of new algorithms easier. MDP takes care of tedious tasks
like numerical type and dimensionality checking, leaving the developer
free to concentrate on the implementation of the training and execution
phases. The new elements then automatically integrate with the rest of
the library.

As its user base is increasing, MDP might be a good candidate for
becoming a common repository of user-supplied, freely available,
Python-implemented data processing algorithms.

Resources
---------
Download: http://sourceforge.net/project/showfiles.php?group_id=116959
Homepage: http://mdp-toolkit.sourceforge.net
Mailing list: http://sourceforge.net/mail/?group_id=116959

What's new in version 2.0?
--------------------------
MDP 2.0 introduces some important structural changes. It is now possible
to implement nodes with multiple training phases and even nodes with an
undetermined number of phases. This allows, for example, the
implementation of algorithms that need to collect some statistics on the
whole input before proceeding with the actual training, or others that
need to iterate over a training phase until a convergence criterion is
satisfied. The ability to train each phase using chunks of input data is
maintained if the chunks are generated with iterators.

Nodes that require supervised training can be defined in a very
straightforward way by passing additional arguments (e.g., labels or a
target output) to the 'train' method.

New algorithms have been added, expanding the base of readily available
basic data processing elements. MDP is now based exclusively on the
NumPy Python numerical extension.

--
Tiziano Zito
Institute for Theoretical Biology
Humboldt-Universitaet zu Berlin
Invalidenstrasse, 43
D-10115 Berlin, Germany

Pietro Berkes
Gatsby Computational Neuroscience Unit
Alexandra House, 17 Queen Square
London WC1N 3AR, United Kingdom
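The node interface described in the announcement is compact enough to
show inline. A sketch of a single trainable node, assuming the PCANode
constructor accepts output_dim as in the MDP documentation of this era
(the data is random and purely illustrative):

    import numpy as np
    import mdp

    x = np.random.random((100, 10))        # 100 observations, 10 variables

    pca = mdp.nodes.PCANode(output_dim=3)  # keep the top 3 components
    pca.train(x)                           # training can be chunked over calls
    pca.stop_training()                    # close the (single) training phase
    y = pca.execute(x)                     # project onto the components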
From jdhunter at ace.bsd.uchicago.edu Fri Jun 30 09:23:44 2006
From: jdhunter at ace.bsd.uchicago.edu (John Hunter)
Date: Fri, 30 Jun 2006 08:23:44 -0500
Subject: [SciPy-user] Triangulation of L-shaped domains
In-Reply-To: <44A4E961.8000208@iam.uni-stuttgart.de> (Nils Wagner's
	message of "Fri, 30 Jun 2006 11:05:37 +0200")
References: <44A4E961.8000208@iam.uni-stuttgart.de>
Message-ID: <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu>

>>>>> "Nils" == Nils Wagner writes:

    Nils> Hi all, I have installed delaunay from the sandbox. How can
    Nils> I triangulate L-shaped domains?

    Nils> My first try is somewhat unsatisfactory (delaunay.png).
    Nils> How can I remove the unwanted triangles down to the right?

delaunay assumes a convex shape, which your domain is not. I think
you'll need a more sophisticated mesh algorithm. You might succeed by
breaking your L-shaped domain into two rectangles and treating them
separately.

JDH

From nwagner at iam.uni-stuttgart.de Fri Jun 30 09:52:40 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 30 Jun 2006 15:52:40 +0200
Subject: [SciPy-user] Triangulation of L-shaped domains
In-Reply-To: <87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu>
References: <44A4E961.8000208@iam.uni-stuttgart.de>
	<87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <44A52CA8.6060303@iam.uni-stuttgart.de>

John Hunter wrote:
>>>>>> "Nils" == Nils Wagner writes:
>>>>>>
>
> Nils> Hi all, I have installed delaunay from the sandbox. How can
> Nils> I triangulate L-shaped domains?
>
> Nils> My first try is somewhat unsatisfactory (delaunay.png).
> Nils> How can I remove the unwanted triangles down to the right?
>
> delaunay assumes a convex shape, which your domain is not. I think
> you'll need a more sophisticated mesh algorithm.
>
A short note in the docstring would be very helpful.
**delaunay assumes a convex shape**

Nils

help(delaunay) results in

Help on package scipy.sandbox.delaunay in scipy.sandbox:

NAME
    scipy.sandbox.delaunay - Delaunay triangulation and interpolation tools.

FILE
    /usr/lib64/python2.4/site-packages/scipy/sandbox/delaunay/__init__.py

DESCRIPTION
    :Author: Robert Kern
    :Copyright: Copyright 2005 Robert Kern.
    :License: BSD-style license. See LICENSE.txt in the scipy source
    directory.

PACKAGE CONTENTS
    _delaunay
    interpolate
    setup
    testfuncs
    triangulate

> You might succeed by breaking your L-shaped domain into two rectangles
> and treating them separately.
>
> JDH
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From robert.kern at gmail.com Fri Jun 30 12:32:28 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 30 Jun 2006 11:32:28 -0500
Subject: [SciPy-user] Triangulation of L-shaped domains
In-Reply-To: <44A52CA8.6060303@iam.uni-stuttgart.de>
References: <44A4E961.8000208@iam.uni-stuttgart.de>
	<87wtaysrof.fsf@peds-pc311.bsd.uchicago.edu>
	<44A52CA8.6060303@iam.uni-stuttgart.de>
Message-ID: <44A5521C.6060206@gmail.com>

Nils Wagner wrote:
> John Hunter wrote:
>>>>>>> "Nils" == Nils Wagner writes:
>>>>>>>
>> Nils> Hi all, I have installed delaunay from the sandbox. How can
>> Nils> I triangulate L-shaped domains?
>>
>> Nils> My first try is somewhat unsatisfactory (delaunay.png).
>> Nils> How can I remove the unwanted triangles down to the right?
>>
>> delaunay assumes a convex shape, which your domain is not. I think
>> you'll need a more sophisticated mesh algorithm.
>
> A short note in the docstring would be very helpful.
> **delaunay assumes a convex shape**

Actually, it doesn't "assume" any shape at all. It computes a Delaunay
triangulation of a set of points irrespective of any boundary edges that
you might have wanted. That's what an (unqualified) Delaunay
triangulation is. Docstrings are not the place for tutorials on basic
concepts. Google works quite well.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
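A practical middle ground for the original question: triangulate all the
points, then discard the triangles that fall in the cut-away corner. A
sketch, assuming the sandbox Triangulation exposes a triangle_nodes
array of corner indices (as its later matplotlib.delaunay incarnation
does), with a made-up L-shaped geometry:

    import numpy as np
    from scipy.sandbox.delaunay import Triangulation

    def l_shape_triangles(x, y):
        # keep triangles whose centroid is NOT in the lower-right
        # quadrant of the unit square (the cut-away part of the L);
        # adjust the test to your actual geometry
        tri = Triangulation(x, y)
        keep = []
        for nodes in tri.triangle_nodes:
            cx, cy = x[nodes].mean(), y[nodes].mean()
            if not (cx > 0.5 and cy < 0.5):
                keep.append(nodes)
        return tri, np.array(keep)

The centroid test is crude, but it works for domains whose concavities
are easy to describe analytically.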
From travis at enthought.com Fri Jun 30 16:59:20 2006
From: travis at enthought.com (Travis N. Vaught)
Date: Fri, 30 Jun 2006 15:59:20 -0500
Subject: [SciPy-user] ANN: SciPy 2006 Conference Reminder
Message-ID: <44A590A8.5040705@enthought.com>

The *SciPy 2006 Conference* is scheduled for Thursday and Friday, August
17-18, 2006 at CalTech, with Sprints and Tutorials Monday-Wednesday,
August 14-16. Conference details are at http://www.scipy.org/SciPy2006

The deadlines for submitting abstracts and early registration are
approaching...

Call for Presenters
-------------------
If you are interested in presenting at the conference, you may submit an
abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org
-- the deadline for abstract submission is July 7, 2006. Papers and/or
presentation slides are acceptable and are due by August 4, 2006.

Registration:
-------------
Early registration ($100.00) is still available through July 14. You may
register online at http://www.enthought.com/scipy06. Registration
includes breakfast and lunch Thursday & Friday and a very nice dinner
Thursday night. After July 14, 2006, registration will cost $150.00.

Tutorials and Sprints
---------------------
This year the Sprints (Monday and Tuesday, August 14-15) and Tutorials
(Wednesday, August 16) are offered at no additional charge (you're on
your own for food on those days, though). Remember to include these days
in your travel plans.

The following topics are presented as Tutorials Wednesday (more info
here: http://www.scipy.org/SciPy2006/TutorialSessions):

- "3D visualization in Python using tvtk and MayaVi"
- "Scientific Data Analysis and Visualization using IPython and
  Matplotlib."
- "Building Scientific Applications using the Enthought Tool Suite
  (Envisage, Traits, Chaco, etc.)"
- "NumPy (migration from Numarray & Numeric, overview of NumPy)"

The Sprint topics are under discussion here:
http://www.scipy.org/SciPy2006/CodingSprints

See you in August!

Travis

From William.Hunter at mmhgroup.com Fri Jun 30 08:40:42 2006
From: William.Hunter at mmhgroup.com (William Hunter)
Date: Fri, 30 Jun 2006 14:40:42 +0200
Subject: [SciPy-user] Struggling to make use of sparse
Message-ID: <98AA9E629A145D49A1A268FA6DBA70B424902C@mmihserver01.MMIH01.local>

Old Matlab user here; I need some help on using 'sparse'. Not a lot of
documentation on it, and I suck at programming, so there you go...

I have a (sparse) array [K] and vector {F}. I need to solve for {U}. If
these were 'normal' matrices, I would do the following:

>>> import numpy as N
>>> U = N.linalg.solve(K,F)

I know how to get the matrices (both K and F) in sparse format with
'lil_matrix', but I get an error if I try the following:

>>> import scipy.spare as SS
>>> U = SS.sparse.solve(K,F)

What am I doing wrong? Somebody who's done FEA-type stuff will be able
to answer.

Thanks,
William
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
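Two things go wrong in the snippet above: the package is scipy.sparse,
not scipy.spare, and the solver is not spelled sparse.solve. A sketch of
one working route with a toy stiffness-like matrix, assuming a SciPy
that provides scipy.sparse.linalg.spsolve (in trees contemporary with
this thread the same solver lived in scipy.linsolve):

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve  # older scipy: scipy.linsolve

    n = 5
    K = sparse.lil_matrix((n, n))
    for i in range(n):             # toy tridiagonal "stiffness" matrix
        K[i, i] = 2.0
        if i + 1 < n:
            K[i, i + 1] = -1.0
            K[i + 1, i] = -1.0
    F = np.ones(n)

    U = spsolve(K.tocsr(), F)      # convert lil -> csr before solving
    print U

lil_matrix is convenient for assembly; converting to CSR (or CSC) before
the solve is the usual pattern.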