From sturla at molden.no Thu Oct 1 03:12:19 2009 From: sturla at molden.no (Sturla Molden) Date: Thu, 01 Oct 2009 09:12:19 +0200 Subject: [SciPy-User] optimize.leastsq does not converge with full Jacobian? Message-ID: <4AC45653.5040201@molden.no> I was trying to fit some Michaelis-Menten data from Bates and Watts (1988) puromycin experiment to test non-linear regression in SciPy. I get some very strange results that I don't understand. First the non-linear model to be fitted is: y = Vmax * x / (x + Km) Not using Jacobian works: import numpy as np import scipy from scipy.linalg import qr, solve from scipy.optimize import leastsq # Bates and Watts (1988) puromycin data data = ((0.02, 47, 76), (0.06, 97, 107), (0.11, 123, 139), (0.22, 152, 159), (0.56, 191, 201), (1.10, 200, 207)) data = np.array(data) X = data[:,0:1].repeat(2,axis=1).flatten() Y = data[:,1:].flatten() # initial fit from Linewaver-Burk plot y = Y**-1 x = np.vstack((np.ones(X.shape),X**-1)).T q,r = qr(x, econ=True) b = solve(r, (np.mat(y) * np.mat(q)).T).ravel() # Michaelis-Menten fit from Lineweaver-Burk Vmax = 1.0 / b[0] Km = b[1] * Vmax # refit with Levenberg-Marquardt method def michaelis_menten(t, x): Vmax, Km = t return Vmax*x/(x + Km) def residuals(t, x, y): return y - michaelis_menten(t, x) (Vmax,Km),ierr = leastsq(residuals, (Vmax,Km), args=(X,Y)) This gives Vmax,Km = 2.12683559e+02, 6.41209954e-02, which is the "correct" answer. However, when I use the full Jacobian, def jacobian(t, x, y): j = np.zeros((2,x.shape[0])) Vmax, Km = t j[0,:] = x/(Km + x) j[1,:] = -Vmax*x/((Km + x)**2) return j (Vmax,Km),ierr = leastsq(residuals, (Vmax,Km), args=(X,Y), Dfun=jacobian, col_deriv=1) I always get the start value of (Vmax,Km) returned, which is (195.8027, 0.0484065). What on earth is going on? Is there a bug in SciPy or am I being incredibly stupid? Sturla Molden From pav+sp at iki.fi Thu Oct 1 03:41:15 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Thu, 1 Oct 2009 07:41:15 +0000 (UTC) Subject: [SciPy-User] optimize.leastsq does not converge with full Jacobian? References: <4AC45653.5040201@molden.no> Message-ID: Thu, 01 Oct 2009 09:12:19 +0200, Sturla Molden wrote: [clip] > def residuals(t, x, y): > return y - michaelis_menten(t, x) [clip] > def jacobian(t, x, y): > j = np.zeros((2,x.shape[0])) > Vmax, Km = t > j[0,:] = x/(Km + x) > j[1,:] = -Vmax*x/((Km + x)**2) > return j Sign error. -- Pauli Virtanen From sebastian.walter at gmail.com Thu Oct 1 04:18:13 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Thu, 1 Oct 2009 10:18:13 +0200 Subject: [SciPy-User] optimize.leastsq does not converge with full Jacobian? In-Reply-To: <4AC45653.5040201@molden.no> References: <4AC45653.5040201@molden.no> Message-ID: is it ok with you if I use your code in the unit test of the automatic differentiation tool pyadolc? (http://github.com/b45ch1/pyadolc) regards, Sebastian On Thu, Oct 1, 2009 at 9:12 AM, Sturla Molden wrote: > I was trying to fit some Michaelis-Menten data from Bates and Watts > (1988) puromycin experiment to test non-linear regression in SciPy. I > get some very strange results that I don't understand. 
> > First the non-linear model to be fitted is: > > y = Vmax * x / (x + Km) > > Not using Jacobian works: > > import numpy as np > import scipy > from scipy.linalg import qr, solve > from scipy.optimize import leastsq > > > # Bates and Watts (1988) puromycin data > > data = ((0.02, 47, 76), > (0.06, 97, 107), > (0.11, 123, 139), > (0.22, 152, 159), > (0.56, 191, 201), > (1.10, 200, 207)) > > data = np.array(data) > > X = data[:,0:1].repeat(2,axis=1).flatten() > Y = data[:,1:].flatten() > > # initial fit from Linewaver-Burk plot > y = Y**-1 > x = np.vstack((np.ones(X.shape),X**-1)).T > q,r = qr(x, econ=True) > b = solve(r, (np.mat(y) * np.mat(q)).T).ravel() > > # Michaelis-Menten fit from Lineweaver-Burk > Vmax = 1.0 / b[0] > Km = b[1] * Vmax > > # refit with Levenberg-Marquardt method > > def michaelis_menten(t, x): > Vmax, Km = t > return Vmax*x/(x + Km) > > def residuals(t, x, y): > return y - michaelis_menten(t, x) > > (Vmax,Km),ierr = leastsq(residuals, (Vmax,Km), args=(X,Y)) > > This gives Vmax,Km = 2.12683559e+02, 6.41209954e-02, which is the > "correct" answer. > > However, when I use the full Jacobian, > > def jacobian(t, x, y): > j = np.zeros((2,x.shape[0])) > Vmax, Km = t > j[0,:] = x/(Km + x) > j[1,:] = -Vmax*x/((Km + x)**2) > return j > > (Vmax,Km),ierr = leastsq(residuals, (Vmax,Km), args=(X,Y), > Dfun=jacobian, col_deriv=1) > > I always get the start value of (Vmax,Km) returned, which is (195.8027, > 0.0484065). > > What on earth is going on? > > Is there a bug in SciPy or am I being incredibly stupid? > > > Sturla Molden > > > > > > > > > > > > > > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From super.inframan at gmail.com Thu Oct 1 07:08:42 2009 From: super.inframan at gmail.com (Gustaf Nilsson) Date: Thu, 1 Oct 2009 12:08:42 +0100 Subject: [SciPy-User] memoryError when i have plenty of available ram In-Reply-To: <4AC41B97.6080509@ar.media.kyoto-u.ac.jp> References: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> <45d1ab480909302000w629e1f6eudada779a4839d7e0@mail.gmail.com> <4AC41B97.6080509@ar.media.kyoto-u.ac.jp> Message-ID: yeah im aware of the 2 gb limit, and i expect to upgrade to 64bit (w7) next month... still, my program doesnt even get close to that before the memoryError. Will see if i can make a small script that does the same thing when i get home fron work tonight. Hey you are in kyoto? ive spent a few months in the north of kyoto (matsugasaki/kuramaguchi) Will be going back for a wedding in april (sakura!) On Thu, Oct 1, 2009 at 4:01 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > David Goldsmith wrote: > > On Wed, Sep 30, 2009 at 6:25 PM, David Cournapeau > > wrote: > > > > On Thu, Oct 1, 2009 at 3:58 AM, Gustaf Nilsson > > > wrote: > > > Hiya > > > > > > I know someone just started a memory thread, but i didnt wanna > > hijack it.. > > > My image processing app that im working on seems to crash with > > "memoryError" > > > when it hits about 1.1gb of mem usage (same on two computers; > > has 2/4gb ram, > > > xp 32bit) > > > > If possible, a small script which reproduces the problem would be > > helpful. > > > > Keep in mind that on windows, by default, your python script cannot > > use more than 2 Gb anyway, even if you have 4Gb of memory. > > > > > > Interesting. Is this true in Vista? Windows 7? > > It is true for (at least) most OSes, actually, and a limitation of 32 > bits addressing. 
The only workaround is to use several processes. The
> origin is that a process cannot 'see' more than 4 Gb in 32 bits, and
> part of it has to be reserved for the kernel - windows and linux by
> default limit the virtual addressing to 2 Gb per process in the userland.
> There are options to split between 3 Gb user /1Gb kernel or the contrary
> in linux, and similar in windows.
>
> There is this pretty good explanation here for linux for the gory
> details: http://kerneltrap.org/node/2450 (I would be surprised if
> windows kernel was fundamentally different - except for the fork thing
> of course).
>
> The true solution is to use a 64 bits OS.
>
> cheers,
>
> David

From david at ar.media.kyoto-u.ac.jp Thu Oct 1 06:53:43 2009
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 01 Oct 2009 19:53:43 +0900
Subject: [SciPy-User] memoryError when i have plenty of available ram
In-Reply-To:
References: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> <45d1ab480909302000w629e1f6eudada779a4839d7e0@mail.gmail.com> <4AC41B97.6080509@ar.media.kyoto-u.ac.jp>
Message-ID: <4AC48A37.5090402@ar.media.kyoto-u.ac.jp>

Gustaf Nilsson wrote:
> yeah im aware of the 2 gb limit, and i expect to upgrade to 64bit (w7)
> next month...
> still, my program doesnt even get close to that before the memoryError.

Depending on the action, it may be a bug in the called function, or the function is just memory hungry (it can also be a reference count error somewhere, but let's hope it won't go to that).

> Hey you are in kyoto? ive spent a few months in the north of kyoto
> (matsugasaki/kuramaguchi)

Yes, still in Kyoto, in the north as well - but not as far as kurama (I am still in the city, in Demachiyanagi near kyodai). Sakura in Kyoto is nice :)

cheers,

David

From sturla at molden.no Thu Oct 1 09:01:08 2009
From: sturla at molden.no (Sturla Molden)
Date: Thu, 01 Oct 2009 15:01:08 +0200
Subject: [SciPy-User] optimize.leastsq does not converge with full Jacobian?
In-Reply-To:
References: <4AC45653.5040201@molden.no>
Message-ID: <4AC4A814.6060405@molden.no>

Pauli Virtanen skrev:
> def jacobian(t, x, y):
>> j = np.zeros((2,x.shape[0]))
>> Vmax, Km = t
>> j[0,:] = x/(Km + x)
>> j[1,:] = -Vmax*x/((Km + x)**2)
>> return j
>>
>
> Sign error.
>

Thanks Pauli.

I differentiated the Michaelis-Menten function instead of the residuals. That goes for "incredibly stupid" then.

Sturla

From super.inframan at gmail.com Thu Oct 1 09:14:12 2009
From: super.inframan at gmail.com (Gustaf Nilsson)
Date: Thu, 1 Oct 2009 14:14:12 +0100
Subject: [SciPy-User] memoryError when i have plenty of available ram
In-Reply-To: <4AC48A37.5090402@ar.media.kyoto-u.ac.jp>
References: <5b8d13220909301825ta758091pa25f269b6f91ca45@mail.gmail.com> <45d1ab480909302000w629e1f6eudada779a4839d7e0@mail.gmail.com> <4AC41B97.6080509@ar.media.kyoto-u.ac.jp> <4AC48A37.5090402@ar.media.kyoto-u.ac.jp>
Message-ID:

Hey
Kuramaguchi isnt the same as Kurama. Its actually very close, just one stop north of Imadegawa. The wedding im going to is at the shimogamo temple, just where the rivers meet.

aanyways... I think the program releases its memory properly, because if i look at memory usage it actually goes down once a certain part of the process is done.
(and then up again as the next one starts) the script is completely filled with nested eval()'s so its quite possible theres evil goin on wrong deep down in the code somewhere.. but yeah, when i come home from work tonight ill try and see if i can isolate the problem and show the code..

Gusty

On Thu, Oct 1, 2009 at 11:53 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote:
> Gustaf Nilsson wrote:
> > yeah im aware of the 2 gb limit, and i expect to upgrade to 64bit (w7)
> > next month...
> > still, my program doesnt even get close to that before the memoryError.
>
> Depending on the action, it may be a bug in the called function, or the
> function is just memory hungry (it can also be a reference count error
> somewhere, but let's hope it won't go to that).
>
> > Hey you are in kyoto? ive spent a few months in the north of kyoto
> > (matsugasaki/kuramaguchi)
>
> Yes, still in Kyoto, in the north as well - but not as far as kurama (I
> am still in the city, in Demachiyanagi near kyodai). Sakura in Kyoto is
> nice :)
>
> cheers,
>
> David

From dwf at cs.toronto.edu Thu Oct 1 09:15:11 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 1 Oct 2009 09:15:11 -0400
Subject: [SciPy-User] Anyone have an example of using arpack (scipy.sparse.linalg.eigen)?
In-Reply-To: <2588da420909302059y63abab76u4be547fdfda81b33@mail.gmail.com>
References: <2588da420909302059y63abab76u4be547fdfda81b33@mail.gmail.com>
Message-ID: <733ADE1E-97A6-4051-B85D-38534C7C1EAA@cs.toronto.edu>

It should be noted that (AFAIK) that wrapper isn't complete. It's missing things e.g. sparse SVD, I think - it only wraps a few things. Unless all you're interested in is the eigen solver.

David

On 30-Sep-09, at 11:59 PM, Jeremy Conlin wrote:
> I need to use the arpack wrapper in scipy. Does anyone have an
> example of how they used this? This would be great to get me started
> in my research.
>
> Thanks,
> Jeremy

From yosefmel at post.tau.ac.il Thu Oct 1 10:01:01 2009
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Thu, 1 Oct 2009 16:01:01 +0200
Subject: [SciPy-User] memoryError when i have plenty of available ram
In-Reply-To:
References: <4AC41B97.6080509@ar.media.kyoto-u.ac.jp>
Message-ID: <200910011601.01519.yosefmel@post.tau.ac.il>

On Thursday 01 October 2009 13:08:42 Gustaf Nilsson wrote:
> yeah im aware of the 2 gb limit, and i expect to upgrade to 64bit (w7) next
> month...
> still, my program doesnt even get close to that before the memoryError.
> Will see if i can make a small script that does the same thing when i get
> home fron work tonight.

Maybe a memory fragmentation issue: if you're trying to allocate contiguous arrays that are larger than the largest set of contiguous free pages, the OS will fail even if you have memory.

From jlconlin at gmail.com Thu Oct 1 10:39:56 2009
From: jlconlin at gmail.com (Jeremy Conlin)
Date: Thu, 1 Oct 2009 08:39:56 -0600
Subject: [SciPy-User] Anyone have an example of using arpack (scipy.sparse.linalg.eigen)?
In-Reply-To: <733ADE1E-97A6-4051-B85D-38534C7C1EAA@cs.toronto.edu>
References: <2588da420909302059y63abab76u4be547fdfda81b33@mail.gmail.com> <733ADE1E-97A6-4051-B85D-38534C7C1EAA@cs.toronto.edu>
Message-ID: <2588da420910010739u77f24cefr1ab1b51a7fec622d@mail.gmail.com>

I'm really only interested in the Arnoldi package. I just need Arnoldi's method for a general linear operator. I will be giving the code a function used to apply the linear operator to a vector.

Jeremy

On Thu, Oct 1, 2009 at 7:15 AM, David Warde-Farley wrote:
> It should be noted that (AFAIK) that wrapper isn't complete. It's
> missing things e.g. sparse SVD, I think - it only wraps a few things.
> Unless all you're interested in is the eigen solver.
>
> David
>
> On 30-Sep-09, at 11:59 PM, Jeremy Conlin wrote:
>> I need to use the arpack wrapper in scipy. Does anyone have an
>> example of how they used this? This would be great to get me started
>> in my research.
>>
>> Thanks,
>> Jeremy
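A minimal sketch of the matrix-free Arnoldi call Jeremy describes. The solver's name has shifted across SciPy releases (the subject line's scipy.sparse.linalg.eigen became eigs in later versions), so treat the exact import below as an assumption; the LinearOperator pattern is the part that carries over:

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 1000

def matvec(v):
    # matrix-free application of the operator: a 1-D discrete Laplacian
    # stands in here for whatever operator the caller really has
    w = -2.0 * v
    w[:-1] += v[1:]
    w[1:] += v[:-1]
    return w

A = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
vals, vecs = eigs(A, k=6, which='LM')   # six eigenpairs of largest magnitude

Only matvec products are ever taken, which is exactly the "function that applies the linear operator to a vector" interface ARPACK expects; eigs returns the k requested eigenvalues and one column per eigenvector.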
From henrylindsaysmith at gmail.com Thu Oct 1 12:52:07 2009
From: henrylindsaysmith at gmail.com (ninjasmith)
Date: Thu, 1 Oct 2009 09:52:07 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] zplane function equivalent
Message-ID: <25702338.post@talk.nabble.com>

Hi,

I'm looking for a function that performs the equivalent of zplane in matlab. That is I have the coefficients of a filter and I would like to see the poles and zeros plotted on the zplane.

anyone managed this before? I guess I could find the roots myself and then build a custom plot to do it but I'd rather not :)

From lorenzo.isella at gmail.com Thu Oct 1 13:21:48 2009
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Thu, 01 Oct 2009 19:21:48 +0200
Subject: [SciPy-User] Calculating (Conditional) Time Intervals
Message-ID: <4AC4E52C.6010808@gmail.com>

Dear All,
Consider an array of this kind:

1 12 45
2 7 12
2 15 37
3 25 89
3 8 13
3 13 44
4 77 89
4 77 89
5 12 22
8 12 22
9 15 22
11 22 37
23 3 12
24 18 37
25 1 12

where the first column is time measured in some units. The other two columns are some ID's identifying infected individuals establishing a contact at the corresponding time. As you can see, there may be time-gaps in my recorded times and there may be repeated times if several contacts take place simultaneously. The ID's are always sorted out in such a way that the ID number of the 2nd column is always smaller than the corresponding entry of the third column (I am obviously indexing everything from 1).

Now, this is my problem: I want to look at a specific ID I will call A (let us say A is 12) and calculate all the time differences t_AC-t_AB for B!=C, i.e. all the time intervals between the most recent contact between A and B and the first subsequent contact between A and C (which has to be different from B).

An example to fix the ideas: A=12, B=22, C=1, then t_AB=8 (pick the most recent one before t_AC), t_AC=25, hence t_AC-t_AB=25-8=17. (but let me say it again: I want to be able to calculate all such intervals for any B and C on the fly). It should be clear at this point that the calculated t_AC-t_AB != t_AB-t_AC as some time-ordering is implicit in the definition (in t_AC-t_AB, AC contacts have to always be more recent than AB contacts).

Even in the case of multiple disjointed AB and AC contacts, I always have to look for the closest time intervals in time. E.g. if I had

10 12 22
40 12 22
60 1 12
100 1 12
110 12 22
130 12 22
150 1 12

then I would work out the time intervals 60-40=20 and 150-130=20.

Sorry for the long email, but any suggestion about how to calculate all this efficiently would help me a great deal.
Many thanks

Lorenzo
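A rough sketch of one way to do the bookkeeping Lorenzo describes, assuming the data sits in an (n, 3) integer array with columns (time, id1, id2) like the listings above. For fixed A, B, C it pairs each A-C contact with the most recent earlier A-B contact, and keeps the pair only when no other A-C contact falls in between; that reproduces the 17 of the first example and the two 20s of the second. Looping B and C over the IDs seen together with A is the naive way to get all intervals "on the fly":

import numpy as np

def ac_minus_ab(data, A, B, C):
    # times between the last A-B contact and the first A-C contact after it
    pair = np.sort(data[:, 1:], axis=1)      # make the ID column order irrelevant
    t = data[:, 0]
    t_ab = t[(pair[:, 0] == min(A, B)) & (pair[:, 1] == max(A, B))]
    t_ac = t[(pair[:, 0] == min(A, C)) & (pair[:, 1] == max(A, C))]
    out = []
    for tc in t_ac:
        earlier = t_ab[t_ab < tc]
        if earlier.size == 0:
            continue
        tb = earlier.max()                   # most recent A-B before tc
        if not ((t_ac > tb) & (t_ac < tc)).any():   # tc is the first A-C after tb
            out.append(tc - tb)
    return np.array(out)

# data = np.loadtxt('contacts.txt', dtype=int)   # columns: time, id1, id2
# ac_minus_ab(data, 12, 22, 1)  ->  array([17]) on the first listing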
From bruce at clearscienceinc.com Thu Oct 1 14:18:59 2009
From: bruce at clearscienceinc.com (Bruce Ford)
Date: Thu, 1 Oct 2009 14:18:59 -0400
Subject: [SciPy-User] numpy.squeeze not squeezing
In-Reply-To:
References: <3d375d730909301416w4eb6dbbei9b47a22bd141b91a@mail.gmail.com> <3d375d730909301505q221473d6i450fcb23b14cfd58@mail.gmail.com> <3d375d730909301521o7a4652bm6b947a4ba76af726@mail.gmail.com>
Message-ID:

Ryan's solution works great. Finally...a break!

Bruce
---------------------------------------
Bruce W. Ford
Clear Science, Inc.
bruce at clearscienceinc.com

On Wed, Sep 30, 2009 at 9:25 PM, Ryan May wrote:
> On Wed, Sep 30, 2009 at 5:21 PM, Robert Kern wrote:
>> 2009/9/30 Bruce Ford :
>>> print type(swh1)  #gave
>>>
>>> print type(swh)  #gave
>>
>> Ah, yes. The latter is what I meant.
>>
>> Yup, my diagnosis is correct. np.squeeze() is interpreting swh as a
>> scalar (or rank 0 array) with dtype=object rather than an array. You
>> will have to get a real ndarray from the Variable. I am not familiar
>> with the netcdf4 API, so you will have to refer to its documentation
>> on how to do that. It won't be as simple as np.asarray(swh), I am
>> afraid.
>
> If it's anything like the other NetCDF bindings, it's just:
>
>   swh_arr = swh[:]
>
> Ryan
>
> --
> Ryan May
> Graduate Research Assistant
> School of Meteorology
> University of Oklahoma
> Sent from Norman, Oklahoma, United States

From prabhu at aero.iitb.ac.in Thu Oct 1 16:48:57 2009
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Fri, 02 Oct 2009 02:18:57 +0530
Subject: [SciPy-User] [ANN] SciPy India conference in Dec. 2009
Message-ID: <4AC515B9.4070604@aero.iitb.ac.in>

Greetings,

The first "Scientific Computing with Python" conference in India (http://scipy.in) will be held from December 12th to 17th, 2009 at the Technopark in Trivandrum, Kerala, India (http://www.technopark.org/). The theme of the conference will be "Scientific Python in Action" with respect to application and teaching. We are pleased to have Travis Oliphant, the creator and lead developer of numpy (http://numpy.scipy.org) as the keynote speaker.

Here is a rough schedule of the conference:

Sat. Dec. 12 (conference)
Sun. Dec. 13 (conference)
Mon. Dec. 14 (tutorials)
Tues. Dec. 15 (tutorials)
Wed. Dec. 16 (sprint)
Thu. Dec. 17 (sprint)

The tutorial sessions will have two tracks, one specifically for teachers and one for the general public. There are no registration fees. Please register at: http://scipy.in

The call for papers will be announced soon.
This conference is organized by the FOSSEE project (http://fossee.in) funded by the Ministry of Human Resources and Development's National Mission on Education (NME) through Information and Communication Technology (ICT) jointly with SPACE-Kerala (http://www.space-kerala.org). Regards, Prabhu Ramachandran and Jarrod Millman From jsseabold at gmail.com Thu Oct 1 19:46:24 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 1 Oct 2009 19:46:24 -0400 Subject: [SciPy-User] Better way to multiply block diagonal object array? Message-ID: This is somewhat related to the block diagonal matrix thread from a few months back. I'm wondering if there's a better (faster, cleaner) way to do what I'm trying to do. Say I have a "block diagonal" matrix X = diag(x_1, x_2, x_3) where the x_# are 2d arrays that all have the same number of rows (but don't have to be square). I would like to do something like dot(X.T,X), but putting these all into a single array (eg., scipy.linalg.block_diag) doesn't do what I want. I've been messing with kron and the sparse functions, but I haven't hit on what I'm looking for yet. x_1 = np.arange(4).reshape(2,-1) x_2 = np.arange(4,8).reshape(2,-1) x_3 = np.arange(8,16).reshape(2,-1) X = np.zeros((3,3), dtype=object) X[0,0] = x_1 X[1,1] = x_2 X[2,2] = x_3 # The resulting matrix will be 3 x 3 with position i,j = np.dot(x_i.T,x_j) XTX = np.zeros((3,3), dtype=object) for i in range(3): for j in range(3): XTX[i,j] = np.dot(X[i,i].T,X[j,j]) I could even work with this if I could take a view on it, so that I have an 8x8 float. Any ideas? Skipper From jsseabold at gmail.com Thu Oct 1 19:52:14 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 1 Oct 2009 19:52:14 -0400 Subject: [SciPy-User] Better way to multiply block diagonal object array? In-Reply-To: References: Message-ID: On Thu, Oct 1, 2009 at 7:46 PM, Skipper Seabold wrote: > This is somewhat related to the block diagonal matrix thread from a > few months back. ?I'm wondering if there's a better (faster, cleaner) > way to do what I'm trying to do. > > Say I have a "block diagonal" matrix X = diag(x_1, x_2, x_3) where the > x_# are 2d arrays that all have the same number of rows (but don't > have to be square). ?I would like to do something like dot(X.T,X), but > putting these all into a single array (eg., scipy.linalg.block_diag) > doesn't do what I want. ?I've been messing with kron and the sparse > functions, but I haven't hit on what I'm looking for yet. > > x_1 = np.arange(4).reshape(2,-1) > x_2 = np.arange(4,8).reshape(2,-1) > x_3 = np.arange(8,16).reshape(2,-1) > X = np.zeros((3,3), dtype=object) > X[0,0] = x_1 > X[1,1] = x_2 > X[2,2] = x_3 > > # The resulting matrix will be 3 x 3 with position i,j = np.dot(x_i.T,x_j) > > XTX = np.zeros((3,3), dtype=object) > for i in range(3): > ? ?for j in range(3): > ? ? ? ?XTX[i,j] = np.dot(X[i,i].T,X[j,j]) > > I could even work with this if I could take a view on it, so that I > have an 8x8 float. ?Any ideas? > > Skipper > Err, I must have missed something when I was doing this earlier XX = scipy.linalg.block_diag(x_1, x_2, x_3) XTX2 = np.dot(XX.T,XX) Seems to work fine. Skipper From sebastian.walter at gmail.com Fri Oct 2 04:10:32 2009 From: sebastian.walter at gmail.com (Sebastian Walter) Date: Fri, 2 Oct 2009 10:10:32 +0200 Subject: [SciPy-User] optimize.leastsq does not converge with full Jacobian? 
In-Reply-To: <4AC4A814.6060405@molden.no> References: <4AC45653.5040201@molden.no> <4AC4A814.6060405@molden.no> Message-ID: I've added your example to the unit test of pyadolc: http://github.com/b45ch1/pyadolc/blob/master/tests/complicated_tests.py starting at line 248. Let me know if you have objections. On Thu, Oct 1, 2009 at 3:01 PM, Sturla Molden wrote: > Pauli Virtanen skrev: >> def jacobian(t, x, y): >>> j = np.zeros((2,x.shape[0])) >>> Vmax, Km = t >>> j[0,:] = x/(Km + x) >>> j[1,:] = -Vmax*x/((Km + x)**2) >>> return j >>> >> >> Sign error. >> >> > > Thanks Pauli. > > I differentiated the Michaelis-Menten function instead of the residuals. > That goes for "incredibly stupid" then. > > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From gokhansever at gmail.com Fri Oct 2 10:40:17 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Fri, 2 Oct 2009 09:40:17 -0500 Subject: [SciPy-User] Trend Detection Message-ID: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com> Hello, Recently, I have come across a new paper published with the title: Deterministic versus stochastic trends: Detection and challenges [1]. I am planning to experimentally apply (very preferably, without re-inventing the wheel :) some of the techniques that they adapted for trend detection (parametric and non-parametric ones) on my datasets. (Ahh, I don't know how will it would take for me to fully grasp what this is really means: "*including wavelet analysis, heuristic methods and by fitting fractionally integrated autoregressive moving average models.*") Are any of the mentioned approaches available in SciPy habitat? Any comments or suggestions are appreciated. --- [1] : Fatichi, S., S. M. Barbosa, E. Caporali, and M. E. Silva (2009), Deterministic versus stochastic trends: Detection and challenges, J. Geophys. Res., 114, D18121, doi:10.1029/2009JD011960. -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Fri Oct 2 11:25:29 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 2 Oct 2009 11:25:29 -0400 Subject: [SciPy-User] Trend Detection In-Reply-To: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com> References: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com> Message-ID: On 2-Oct-09, at 10:40 AM, G?khan Sever wrote: > Hello, > > Recently, I have come across a new paper published with the title: > Deterministic versus stochastic trends: Detection and challenges > [1]. I am planning to experimentally apply (very preferably, without > re-inventing the wheel :) some of the techniques that they adapted > for trend detection (parametric and non-parametric ones) on my > datasets. (Ahh, I don't know how will it would take for me to fully > grasp what this is really means: "including wavelet analysis, > heuristic methods and by fitting fractionally integrated > autoregressive moving average models.") http://en.wikipedia.org/wiki/Wavelets http://en.wikipedia.org/wiki/Autoregressive_moving_average_model > Are any of the mentioned approaches available in SciPy habitat? I'd have a look at the TimeSeries scikit, that seems like the right place. As for the wavelet transform there's http://wavelets.scipy.org/ , I've never used it though. 
David From josef.pktd at gmail.com Fri Oct 2 11:31:43 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 2 Oct 2009 11:31:43 -0400 Subject: [SciPy-User] Trend Detection In-Reply-To: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com> References: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com> Message-ID: <1cd32cbb0910020831n3bbd6d61p948a21a41861ffa8@mail.gmail.com> On Fri, Oct 2, 2009 at 10:40 AM, G?khan Sever wrote: > Hello, > > Recently, I have come across a new paper published with the title: > Deterministic versus stochastic trends: Detection and challenges [1]. I am > planning to experimentally apply (very preferably, without re-inventing the > wheel :) some of the techniques that they adapted for trend detection > (parametric and non-parametric ones) on my datasets. (Ahh, I don't know how > will it would take for me to fully grasp what this is really means: > "including wavelet analysis, heuristic methods and by fitting fractionally > integrated autoregressive moving average models.") > > Are any of the mentioned approaches available in SciPy habitat? > > Any comments or suggestions are appreciated. > > --- > [1] : Fatichi, S., S. M. Barbosa, E. Caporali, and M. E. Silva (2009), > Deterministic versus stochastic trends: Detection and challenges, J. > Geophys. Res., 114, D18121, doi:10.1029/2009JD011960. > > -- > G?khan I haven't seen anything directly for this, there are bits and pieces around that might make some of the functions pretty easy to write. pytrix has some unit root tests (adf, not sure about Phillips Perron) http://code.google.com/p/econpy/source/browse/#svn/trunk/pytrix Also some of the tests should be easy to write with statsmodels, since the regression and many statistics are available. In econometrics, there is a large literature (mostly Perron) about stochastic trend versus deterministic trends with structural breaks, but I never read the details. I don't think I have seen any of the non-parametric (incl. wavelets) in this context, but I would worry a lot about the power of the tests in this case. some arma timeseries functions are also available in statsmodels, including estimation and some work with impuls_response_functions. I did it so far only for stationary processes, differencing or deterministic detrending has to be done outside. (I never worked my way through fractionally integrated processes). Whatever you come up with, this would also be very interesting for econometrics and scikits.statsmodels. I looked at trend versus difference stationarity a long time ago in school, and one recommendation also was to work in the frequency domain, but I don't remember the details. I don't know if any of the frequency domain functions in nitime (of nipy origin) are useful for this. Josef > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From henrylindsaysmith at gmail.com Fri Oct 2 12:03:28 2009 From: henrylindsaysmith at gmail.com (ninjasmith) Date: Fri, 2 Oct 2009 09:03:28 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] zplane function equivalent In-Reply-To: <25702338.post@talk.nabble.com> References: <25702338.post@talk.nabble.com> Message-ID: <25716724.post@talk.nabble.com> replying to my own post. 
using the function scipy.signal.tf2zpk gives the poles and zeros which can then be plotted using

scatter(real(p),imag(p))
scatter(real(z),imag(z))

ninjasmith wrote:
>
> Hi,
>
> I'm looking for a function that performs the equivalent of zplane in
> matlab. That is I have the coefficients of a filter and I would like to
> see the poles and zeros plotted on the zplane.
>
> anyone managed this before? I guess I could find the roots myself and
> then build a custom plot to do it but I'd rather not :)
>

From denis-bz-gg at t-online.de Fri Oct 2 12:52:18 2009
From: denis-bz-gg at t-online.de (denis)
Date: Fri, 2 Oct 2009 09:52:18 -0700 (PDT)
Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates
Message-ID: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>

Folks, here is a small tutorial example of scipy.ndimage.map_coordinates: Say Cities is an n x 2 array of [latitude,longitude] coordinates, like

Paris = [48.9, 2.4]
Rome = [41.9, 12.5]
Greenwich = [51.5, 0]
Cities = np.array([ Paris, Rome, Greenwich ])

and A is a 91 x 360 array of temperatures at integer [lat,long] -- A[0] along the equator, A[:,0] along the prime meridian through Greenwich. Then

................................................................................
z = scipy.ndimage.map_coordinates( A, Cities.T, order=order )
................................................................................

is the 3 temperatures at Paris, Rome and Greenwich -- approximately, depending on order. The transpose Cities.T is used because map_coordinates takes columns, not rows. ("RuntimeError: invalid shape for coordinate array" may mean that you forgot the .T .)

If order is 0, map_coordinates rounds [lat,long] to the nearest integers: the temperature at Paris is approximated by A[49,2]. If 1, it does bilinear interpolation in the square with corners A[48,2], A[48,3], A[49,2], A[49,3] for Paris. If 2, it does quadratic interpolation over the 9 points A[48:51, 1:4]. And so on, up to order 5; the default is order=3 (Catmull-Rom ?) Order 1, bilinear, is much faster than 2 or 3.

What happens to A[51,-1] etc. west of Greenwich ? See the mode= option.

Of course the values in A may be arrays -- colors, sounds, anything that can be blended or interpolated -- not just scalars.

Links:
http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html
http://www.scipy.org/Cookbook/Interpolation
http://en.wikipedia.org/wiki/Multivariate_interpolation ff.
For an introduction to interpolation methods, see ... NR ?
For the reverse problem of turning scattered data to a regular grid, see
http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.griddata .
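To make the above runnable end to end, here is a self-contained version with a made-up temperature field (the synthetic A is an assumption; everything else follows denis's text):

import numpy as np
from scipy import ndimage

lat = np.arange(91.)                              # A[0] along the equator
A = 30.0 * np.cos(np.radians(lat))[:, np.newaxis] * np.ones((91, 360))

Paris     = [48.9,  2.4]
Rome      = [41.9, 12.5]
Greenwich = [51.5,  0]
Cities = np.array([ Paris, Rome, Greenwich ])

for order in 0, 1, 3:
    # one interpolated temperature per city, per interpolation order
    z = ndimage.map_coordinates(A, Cities.T, order=order)
    print order, z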
From josef.pktd at gmail.com Fri Oct 2 15:14:03 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 2 Oct 2009 15:14:03 -0400 Subject: [SciPy-User] Trend Detection In-Reply-To: <1cd32cbb0910020831n3bbd6d61p948a21a41861ffa8@mail.gmail.com> References: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com> <1cd32cbb0910020831n3bbd6d61p948a21a41861ffa8@mail.gmail.com> Message-ID: <1cd32cbb0910021214j28dfe394w6ecd63a76583db45@mail.gmail.com> On Fri, Oct 2, 2009 at 11:31 AM, wrote: > On Fri, Oct 2, 2009 at 10:40 AM, G?khan Sever wrote: >> Hello, >> >> Recently, I have come across a new paper published with the title: >> Deterministic versus stochastic trends: Detection and challenges [1]. I am >> planning to experimentally apply (very preferably, without re-inventing the >> wheel :) some of the techniques that they adapted for trend detection >> (parametric and non-parametric ones) on my datasets. (Ahh, I don't know how >> will it would take for me to fully grasp what this is really means: >> "including wavelet analysis, heuristic methods and by fitting fractionally >> integrated autoregressive moving average models.") >> >> Are any of the mentioned approaches available in SciPy habitat? >> >> Any comments or suggestions are appreciated. >> >> --- >> [1] : Fatichi, S., S. M. Barbosa, E. Caporali, and M. E. Silva (2009), >> Deterministic versus stochastic trends: Detection and challenges, J. >> Geophys. Res., 114, D18121, doi:10.1029/2009JD011960. >> >> -- >> G?khan > > I haven't seen anything directly for this, there are bits and pieces > around that might make some of the functions pretty easy to write. > > pytrix has some unit root tests (adf, not sure about Phillips Perron) > http://code.google.com/p/econpy/source/browse/#svn/trunk/pytrix > > Also some of the tests should be easy to write with statsmodels, since > the regression and many statistics are available. > > In econometrics, there is a large literature (mostly Perron) about > stochastic trend versus deterministic trends with structural breaks, > but I never read the details. > > I don't think I have seen any of the non-parametric (incl. wavelets) > in this context, but I would worry a lot about the power of the tests > in this case. > > some arma timeseries functions are also available in statsmodels, > including estimation and some work with impuls_response_functions. I > did it so far only for stationary processes, differencing or > deterministic detrending has to be done outside. (I never worked my > way through fractionally integrated processes). > > Whatever you come up with, this would also be very interesting for > econometrics and scikits.statsmodels. > > I looked at trend versus difference stationarity a long time ago in > school, and one recommendation also was to work in the frequency > domain, but I don't remember the details. > I don't know if any of the frequency domain functions in nitime (of > nipy origin) are useful for this. > > Josef > I had to skim some papers to get the basic idea about fractional integration. for the record and because they might be useful for ARFIMA modelling using lfilter to get fractional integration polynomial (1-L)^d, d<1 `ri` is (1-L)^(-d), d<1 >>> d=0.4; j=np.arange(1000);ri=gamma(d+j)/(gamma(j+1)*gamma(d)) >>> lfilter([1], ri, [1]+[0]*30)[[5,10,20,25]] array([-0.029952 , -0.01100641, -0.00410998, -0.00299859]) >>> d=0.4; j=np.arange(1000);ri=gamma(d+j)/(gamma(j+1)*gamma(d)) >>> # (1-L)^d, d<1 is >>> lfilter([1], ri, [1]+[0]*30) array([ 1. 
, -0.4 , -0.12 , -0.064 , -0.0416 , -0.029952 , -0.0229632 , -0.01837056,
-0.01515571, -0.01279816, -0.01100641, -0.0096056 , -0.00848495, -0.00757118,
-0.00681406, -0.00617808, -0.0056375 , -0.00517324, -0.00477087, -0.00441934,
-0.00410998, -0.00383598, -0.00359188, -0.00337324, -0.00317647, -0.00299859,
-0.00283712, -0.00269001, -0.00255551, -0.00243214, -0.00231864])
>>> # verified for points [[5,10,20,25]] at 4 decimals with Bhardwaj, Swanson, Journal of Econometrics 2006
>>>

Josef

From pgmdevlist at gmail.com Fri Oct 2 15:57:58 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 2 Oct 2009 15:57:58 -0400
Subject: [SciPy-User] Trend Detection
In-Reply-To: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com>
References: <49d6b3500910020740u31f21d57p6760f4fefdc2ad88@mail.gmail.com>
Message-ID: <95D45A51-6CB0-427B-8970-065855CBFB28@gmail.com>

On Oct 2, 2009, at 10:40 AM, Gökhan Sever wrote:
> Hello,
>
> Recently, I have come across a new paper published with the title:
> Deterministic versus stochastic trends: Detection and challenges
> [1]. I am planning to experimentally apply (very preferably, without
> re-inventing the wheel :) some of the techniques that they adapted
> for trend detection (parametric and non-parametric ones) on my
> datasets. (Ahh, I don't know how will it would take for me to fully
> grasp what this is really means: "including wavelet analysis,
> heuristic methods and by fitting fractionally integrated
> autoregressive moving average models.")
>
> Are any of the mentioned approaches available in SciPy habitat?

* Deseasonalization: if you follow the basic approach of the authors (subtract the mean, divide by the std error), you'll find scikits.timeseries quite helpful. If you want STL/loess, you can easily integrate Cleveland's routines in Scipy w/ f2py (there used to be a port somewhere, I know I had one, must still be hidden on a hard disk. Contact me).

* Stationarity: I've worked with a method called SiNos (SIgnificant NOn-Stationarities) that tests whether variations in mean/variance/lag-1 autocorrelation are significant or not at different time scales. Works fine on continuous data, trickier on 'discrete' ones (for example, daily precipitation). It's an adaptation of SiZer (Significant Zero-Crossing of the first derivatives), a technique to find significant features in a distribution. I have a full Scipy port for SiNos, with docs, not posted yet for different reasons (one is that I'm not sure of the license type).
More info: http://www3.interscience.wiley.com/journal/119402153/abstract?CRETRY=1&SRETRY=0

* LRD/Hurst coeffs: check the Koutsoyiannis references in the paper, they're quite useful. Finding the Hurst coefficient is something I'd be interested in seeing. It's on one of my todo lists, but rather low...

* Trend detection: I have some pieces of code that find the position of potential change-points, either with a parametric technique (OLS) or a more robust one (derived from Mann-Kendall). You're stuck with analyzing the data at the observed time scale and LRD will likely throw you off, but that's a good starting point.
More info: http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F2008JCLI1956.1
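Pierre mentions Mann-Kendall; as far as I know nothing in scipy.stats implements it directly, but a bare-bones version of the classical test (no correction for ties, an assumption that ties are rare) only takes a few lines:

import numpy as np
from scipy import stats

def mann_kendall(x):
    # classical Mann-Kendall trend statistic:
    # S = sum over all pairs i < j of sign(x[j] - x[i])
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance under no trend
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * stats.norm.sf(abs(z))                   # two-sided p-value
    return s, z, p

A small p with positive s suggests a monotonic upward trend, but as noted above LRD in the series will likely throw the nominal p-value off, so treat it with care.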
From nwagner at iam.uni-stuttgart.de Fri Oct 2 16:03:46 2009
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 02 Oct 2009 22:03:46 +0200
Subject: [SciPy-User] RuntimeError: FATAL: module compiled as little endian
Message-ID:

Hi all,

who can shed some light on the following message

RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime

Cheers,

Nils

From jsseabold at gmail.com Fri Oct 2 16:09:58 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 2 Oct 2009 16:09:58 -0400
Subject: [SciPy-User] RuntimeError: FATAL: module compiled as little endian
In-Reply-To:
References:
Message-ID:

On Fri, Oct 2, 2009 at 4:03 PM, Nils Wagner wrote:
> Hi all,
>
> who can shed some light on the following message
>
> RuntimeError: FATAL: module compiled as little endian, but
> detected different endianness at runtime
>

With SciPy or something dependent on it? I ran into this recently after updating and then just deleted the install and the build folder and reinstalled and everything worked.
>> A couple months ago I asked for an explanation but didn't get a reply. >> > Yeah that is very odd. An attempt to put the I/O part of PIL in a scikit may > be enough of a push to improve that situation. The only other important > Python library I can think of that was this inert is setuptools, and look > what happened there. I wonder if we shouldn't take the plunge and add OpenImageIO as a dependency? Here's the list of features (from their website): - Extremely simple but powerful?ImageInput?and?ImageOutput?APIs for reading and writing 2D images that is?format agnostic?-- that is, a "client app" doesn't need to know the details about any particular image file formats. Specific formats are implemented by DLL/DSO plugins. - Format plugins for TIFF, JPEG/JFIF, OpenEXR, PNG, HDR/RGBE, Targa, JPEG-2000, BMP, and ICO formats. More coming! The plugins are really good at understanding all the strange corners of the image formats, and are very careful about preserving image metadata (including Exif, GPS, and IPTC data). - An?ImageCache?class that transparently manages a cache so that it can access truly vast amounts of image data (thousands of image files totaling hundreds of GB) very efficiently using only a tiny amount (tens of megabytes at most) of runtime memory. Additionally, a TextureSystem?class provides filtered MIP-map texture lookups, atop the nice caching behavior of ImageCache. - Supported on Linux, OS X, and Windows. All available under the BSD license, so you may modify it and use it in both open source or proprietary apps. I really don't have much hope for PIL. The development process is closed and slow. Once you ignore your community, you are pretty much done for. The only reason PIL still exists is because it is useful, but let's face it: we can easily rewrite 80% of its capabilities at a multi-day sprint. Perhaps we should. Regards St?fan From stefan at sun.ac.za Fri Oct 2 19:19:51 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 3 Oct 2009 01:19:51 +0200 Subject: [SciPy-User] How to create multi-page tiff files with python tools? In-Reply-To: <9457e7c80910021618h27be08bdr568ce6b199541021@mail.gmail.com> References: <9457e7c80910021618h27be08bdr568ce6b199541021@mail.gmail.com> Message-ID: <9457e7c80910021619n19799b41q210717304d6cb748@mail.gmail.com> Apologies, this message was meant for the scikits-image list. Please continue discussions there. 2009/10/3 St?fan van der Walt : > 2009/9/30 Ralf Gommers : >>> I think the problem is that Frederick Lundh is the only one who has >>> permission to add/change code base. >>> I still find it very suspicious that somewhere on the PIL website it >>> states that you can pay (a lot of money) for a "special license" to >>> get early ?access to the development version - so even you are >>> providing (free) patches via the mailing list, you would have to pay >>> to get access to the patched version !? >>> A couple months ago I asked for an explanation but didn't get a reply. >>> >> Yeah that is very odd. An attempt to put the I/O part of PIL in a scikit may >> be enough of a push to improve that situation. The only other important >> Python library I can think of that was this inert is setuptools, and look >> what happened there. > > I wonder if we shouldn't take the plunge and add OpenImageIO as a dependency? 
> > Here's the list of features (from their website): > > - Extremely simple but powerful?ImageInput?and?ImageOutput?APIs for > reading and writing 2D images that is?format agnostic?-- that is, a > "client app" doesn't need to know the details about any particular > image file formats. Specific formats are implemented by DLL/DSO > plugins. > > - Format plugins for TIFF, JPEG/JFIF, OpenEXR, PNG, HDR/RGBE, Targa, > JPEG-2000, BMP, and ICO formats. More coming! The plugins are really > good at understanding all the strange corners of the image formats, > and are very careful about preserving image metadata (including Exif, > GPS, and IPTC data). > > - An?ImageCache?class that transparently manages a cache so that it > can access truly vast amounts of image data (thousands of image files > totaling hundreds of GB) very efficiently using only a tiny amount > (tens of megabytes at most) of runtime memory. Additionally, a > TextureSystem?class provides filtered MIP-map texture lookups, atop > the nice caching behavior of ImageCache. > > - Supported on Linux, OS X, and Windows. ?All available under the BSD > license, so you may modify it and use it in both open source or > proprietary apps. > > > I really don't have much hope for PIL. ?The development process is > closed and slow. ?Once you ignore your community, you are pretty much > done for. ?The only reason PIL still exists is because it is useful, > but let's face it: we can easily rewrite 80% of its capabilities at a > multi-day sprint. ?Perhaps we should. > > Regards > St?fan > From johannesraja at gmail.com Sun Oct 4 11:36:21 2009 From: johannesraja at gmail.com (johannes rara) Date: Sun, 4 Oct 2009 18:36:21 +0300 Subject: [SciPy-User] Problems installing Scipy on Snow Leopard Message-ID: Hi, First post to this forum. I'm complete python/scipy novice but very eager to learn. I was trying to install scipy on my mac (Mac OS X 10.6.1 Snow Leopard) using guidance from this page http://www.scipy.org/Installing_SciPy/Mac_OS_X When I try to install scipy using this line ~/scipy > LDFLAGS="-arch x86_64" FFLAGS="-arch x86_64" python setup.py build I get the following output: Warning: No configuration returned, assuming unavailable. blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] umfpack_info: libraries umfpack not found in /System/Library/Frameworks/Python.framework/Versions/2.6/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib libraries umfpack not found in /opt/local/lib /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/system_info.py:414: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. 
warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE Traceback (most recent call last): File "setup.py", line 160, in setup_package() File "setup.py", line 152, in setup_package configuration=configuration ) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/core.py", line 150, in setup config = configuration() File "setup.py", line 118, in configuration config.add_subpackage('scipy') File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage caller_level = 2) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage caller_level = caller_level + 1) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 781, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 20, in configuration config.add_subpackage('special') File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 851, in add_subpackage caller_level = 2) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 834, in get_subpackage caller_level = caller_level + 1) File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", line 766, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "scipy/special/setup.py", line 7, in from numpy.distutils.misc_util import get_numpy_include_dirs, get_info ImportError: cannot import name get_info I cannot understand what is the problem. Any ideas? I have installed XCode and Unix dev tools (and Numpy). From gokhansever at gmail.com Sun Oct 4 14:20:36 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 4 Oct 2009 13:20:36 -0500 Subject: [SciPy-User] {Chaco} - Minor addition to regression_lasso tool Message-ID: <49d6b3500910041120j7dde9a90m9808c21e33cdba2d@mail.gmail.com> Hello, I have updated RegressionOverlay class to see r_squared using this tool in addition to slope and intercepts. See the regression.py (/examples/basic/) example in action: http://img225.imageshack.us/img225/6646/regressionselection.png Patch is attached. Please review. PS: I also updated http://docs.scipy.org/scipy/docs/scipy.stats.stats.linregress/ and added a simple example. -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: r_squared Type: application/octet-stream Size: 1429 bytes Desc: not available URL: From agile.aspect at gmail.com Sun Oct 4 14:20:56 2009 From: agile.aspect at gmail.com (Agile Aspect) Date: Sun, 4 Oct 2009 11:20:56 -0700 Subject: [SciPy-User] Problems installing Scipy on Snow Leopard In-Reply-To: References: Message-ID: Try removing the LDFLAGS entry from your command line, e.g., ~/scipy > FFLAGS="-arch x86_64" python setup.py build On Sun, Oct 4, 2009 at 8:36 AM, johannes rara wrote: > Hi, > > First post to this forum. I'm complete python/scipy novice but very > eager to learn. 
I was trying to install scipy on my mac (Mac OS X > 10.6.1 Snow Leopard) using guidance from this page > > http://www.scipy.org/Installing_SciPy/Mac_OS_X > > When I try to install scipy using this line > > ~/scipy > LDFLAGS="-arch x86_64" FFLAGS="-arch x86_64" python setup.py build > > I get the following output: > > Warning: No configuration returned, assuming unavailable. > blas_opt_info: > ?FOUND: > ?extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > ?define_macros = [('NO_ATLAS_INFO', 3)] > ?extra_compile_args = ['-faltivec', > '-I/System/Library/Frameworks/vecLib.framework/Headers'] > > lapack_opt_info: > ?FOUND: > ?extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > ?define_macros = [('NO_ATLAS_INFO', 3)] > ?extra_compile_args = ['-faltivec'] > > umfpack_info: > ?libraries umfpack not found in > /System/Library/Frameworks/Python.framework/Versions/2.6/lib > ?libraries umfpack not found in /usr/local/lib > ?libraries umfpack not found in /usr/lib > ?libraries umfpack not found in /opt/local/lib > /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/system_info.py:414: > UserWarning: > ?UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) > ?not found. Directories to search for the libraries can be specified in the > ?numpy/distutils/site.cfg file (section [umfpack]) or by setting > ?the UMFPACK environment variable. > ?warnings.warn(self.notfounderror.__doc__) > ?NOT AVAILABLE > > Traceback (most recent call last): > ?File "setup.py", line 160, in > ?setup_package() > ?File "setup.py", line 152, in setup_package > ?configuration=configuration ) > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/core.py", > line 150, in setup > ?config = configuration() > ?File "setup.py", line 118, in configuration > ?config.add_subpackage('scipy') > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 851, in add_subpackage > ?caller_level = 2) > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 834, in get_subpackage > ?caller_level = caller_level + 1) > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 781, in _get_configuration_from_setup_py > ?config = setup_module.configuration(*args) > ?File "scipy/setup.py", line 20, in configuration > ?config.add_subpackage('special') > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 851, in add_subpackage > ?caller_level = 2) > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 834, in get_subpackage > ?caller_level = caller_level + 1) > ?File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/numpy/distutils/misc_util.py", > line 766, in _get_configuration_from_setup_py > ?('.py', 'U', 1)) > ?File "scipy/special/setup.py", line 7, in > ?from numpy.distutils.misc_util import get_numpy_include_dirs, get_info > ImportError: cannot import name get_info > > I cannot understand what is the problem. Any ideas? I have installed > XCode and Unix dev tools (and Numpy). > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- "We are drowning in information and starving for knowledge." 
                                                          -- Rutherford D. Roger

From johannesraja at gmail.com Sun Oct 4 14:51:10 2009
From: johannesraja at gmail.com (johannes rara)
Date: Sun, 4 Oct 2009 21:51:10 +0300
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To:
References:
Message-ID:

Thanks for the response, but I got the same results:

~/scipy > FFLAGS="-arch x86_64" python setup.py build
Warning: No configuration returned, assuming unavailable.
[clip]
  File "scipy/special/setup.py", line 7, in <module>
    from numpy.distutils.misc_util import get_numpy_include_dirs, get_info
ImportError: cannot import name get_info
~/scipy >

2009/10/4 Agile Aspect :
> Try removing the LDFLAGS entry from your command line, e.g.,
>
>      ~/scipy > FFLAGS="-arch x86_64" python setup.py build
>
> On Sun, Oct 4, 2009 at 8:36 AM, johannes rara wrote:
>> Hi,
>>
>> First post to this forum. I'm a complete python/scipy novice but very
>> eager to learn. I was trying to install scipy on my mac (Mac OS X
>> 10.6.1 Snow Leopard) using guidance from this page
>>
>> http://www.scipy.org/Installing_SciPy/Mac_OS_X
>>
>> [clip]
>>
>> I cannot understand what is the problem. Any ideas? I have installed
>> XCode and Unix dev tools (and Numpy).
>
> --
>      "We are drowning in information and starving for knowledge."
>
>                                                           -- Rutherford D. Roger

From pgmdevlist at gmail.com Sun Oct 4 15:39:33 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sun, 4 Oct 2009 15:39:33 -0400
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To:
References:
Message-ID: <3ADA15CC-A210-4B11-B868-4D0C655F6778@gmail.com>

On Oct 4, 2009, at 2:51 PM, johannes rara wrote:
> Thanks for the response, but I got the same results:

* What Python are you using ? Make sure you use Apple's 2.6.1.
* What Numpy are you using ? Make sure you've installed a recent one
  (not the 1.2.1 that comes with the OS).
* What Scipy are you trying to install ?

From agile.aspect at gmail.com Sun Oct 4 15:41:22 2009
From: agile.aspect at gmail.com (Agile Aspect)
Date: Sun, 4 Oct 2009 12:41:22 -0700
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To:
References:
Message-ID:

Try

    python setup.py build

The reason I'm suggesting not using the environment variables is that
under CentOS 5 and Fedora 9 I couldn't install Scipy when LDFLAGS was
set, under either Python 2.5 or Python 2.6.

And the web page indicates that under MacOS you "may" have to set the
environment variables - it's possible you "may" not have to set them.

Also, you might want to post the version of Numpy being used, since the
last error indicates a potential problem with Numpy (although the
message might be a red herring.)

On Sun, Oct 4, 2009 at 11:51 AM, johannes rara wrote:
> Thanks for the response, but I got the same results:
>
> ~/scipy > FFLAGS="-arch x86_64" python setup.py build
> Warning: No configuration returned, assuming unavailable.
> [clip]
>  File "scipy/special/setup.py", line 7, in <module>
>    from numpy.distutils.misc_util import get_numpy_include_dirs, get_info
> ImportError: cannot import name get_info
> ~/scipy >
>
> [clip]

--
     "We are drowning in information and starving for knowledge."

                                                          -- Rutherford D. Roger

From cool-rr at cool-rr.com Sun Oct 4 16:19:29 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Sun, 4 Oct 2009 22:19:29 +0200
Subject: [SciPy-User] Simulations
Message-ID:

Hello,

This is not directly related to SciPy; I'm posting it here because I
figure that there may be people here who know the scientific computing
world enough to help me with my question.

I've been working on an open-source scientific computing project for
about 6 months now, and I've come to the conclusion that it's about
time to find other users besides myself for it, so I may get valuable
feedback about which direction I should be taking this project.

The project is called GarlicSim (http://garlicsim.com). It's a Pythonic
platform for working with simulations. You may read more about it on
the webpage.
In short, it's a very general framework for creating, running and
analyzing simulations. It's not specific to any scientific field; its
role is to provide a general mold into which all simulations can be
cast. If you want to know more about it you can also read a
(yet-incomplete) introduction to it.

So what I want to know is, who would be good potential first users for
this, and how could I reach them?
I'm not even sure which scientific field I would like to target, so
please suggest.

Thanks,
Ram Rachum

From cournape at gmail.com Sun Oct 4 19:29:35 2009
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 5 Oct 2009 08:29:35 +0900
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To:
References:
Message-ID: <5b8d13220910041629g423b45d7yf68c54737f7f9fce@mail.gmail.com>

On Mon, Oct 5, 2009 at 12:36 AM, johannes rara wrote:
>
> I cannot understand what is the problem. Any ideas? I have installed
> XCode and Unix dev tools (and Numpy).

You should have a recent version of numpy (a few days old at most) to
compile the latest dev version of scipy. The missing get_info function
was added after numpy 1.3.

cheers,

David

From seb.haase at gmail.com Mon Oct 5 03:37:24 2009
From: seb.haase at gmail.com (Sebastian Haase)
Date: Mon, 5 Oct 2009 09:37:24 +0200
Subject: [SciPy-User] Simulations
In-Reply-To:
References:
Message-ID:

On Sun, Oct 4, 2009 at 10:19 PM, cool-RR wrote:
> Hello,
> This is not directly related to SciPy; I'm posting it here because I
> figure that there may be people here who know the scientific computing
> world enough to help me with my question.
> [clip]

Hi Ram,
funny enough - my to-do list for today is to implement a discrete
(checker-board kind of) simulation (somewhat similar to the game of
life, if you will). But before I start looking into your stuff, a few
questions:
Why is the web site a ".com" ?
What is the license ?
Is the documentation written in MS-Word !?!?

Cheers
Sebastian Haase

From ralf.gommers at googlemail.com Mon Oct 5 04:01:02 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 5 Oct 2009 10:01:02 +0200
Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates
In-Reply-To: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
Message-ID:

Hi, did you send this to the list because you want to add it to the docs
(like here http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html)?
Would you like access to the documentation wiki so you can add it
yourself? In that case go to http://docs.scipy.org/numpy/accounts/register/
and let us know your username so you can get edit permissions.

Cheers,
Ralf

On Fri, Oct 2, 2009 at 6:52 PM, denis wrote:
> Folks,
> here is a small tutorial example of scipy.ndimage.map_coordinates:
>
> Say Cities is an n x 2 array of [latitude,longitude] coordinates, like
>     Paris = [48.9, 2.4]
>     Rome = [41.9, 12.5]
>     Greenwich = [51.5, 0]
>     Cities = np.array([ Paris, Rome, Greenwich ])
>
> and A is a 91 x 360 array of temperatures at integer [lat,long] --
> A[0] along the equator, A[:,0] along the prime meridian through
> Greenwich. Then
>
>     z = scipy.ndimage.map_coordinates( A, Cities.T, order=order )
>
> is the 3 temperatures at Paris, Rome and Greenwich -- approximately,
> depending on order.
> The transpose Cities.T is used because map_coordinates takes columns,
> not rows. ("RuntimeError: invalid shape for coordinate array"
> may mean that you forgot the .T .)
>
> If order is 0, map_coordinates rounds [lat,long] to the nearest
> integers: the temperature at Paris is approximated by A[49,2].
> If 1, it does bilinear interpolation in the square with corners
> A[48,2], A[48,3], A[49,2], A[49,3] for Paris.
> If 2, it does quadratic interpolation over the 9 points A[48:51, 1:4].
> And so on, up to order 5; the default is order=3 (Catmull-Rom ?)
> Order 1, bilinear, is much faster than 2 or 3.
>
> What happens to A[51,-1] etc. west of Greenwich ? See the mode= option.
>
> Of course the values in A may be arrays -- colors, sounds, anything
> that can be blended or interpolated -- not just scalars.
>
> Links:
>
> http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html
> http://www.scipy.org/Cookbook/Interpolation
> http://en.wikipedia.org/wiki/Multivariate_interpolation ff.
>
> For an introduction to interpolation methods, see ... NR ?
>
> For the reverse problem of turning scattered data into a regular grid,
> see
> http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.griddata

From cool-rr at cool-rr.com Mon Oct 5 05:35:02 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Mon, 5 Oct 2009 11:35:02 +0200
Subject: [SciPy-User] Simulations
In-Reply-To:
References:
Message-ID:

Hey Sebastian,

About the .com thing, you're right, I'll be registering .org as well.
(They will all point to GitHub for now; once there is a real site, the
.org will be the main one.)

The license is LGPL, meaning in short that the main restriction is that
you're gonna have to state somewhere, when you release your program,
that it uses GarlicSim under the hood.

The introduction is written in Word, yes. If you have a suggestion for
a better format, I might change to that.

If you need help with using GarlicSim let me know and I'll help you.

Ram.

2009/10/5 Sebastian Haase
> On Sun, Oct 4, 2009 at 10:19 PM, cool-RR wrote:
> > Hello,
> > This is not directly related to SciPy; I'm posting it here because I
> > figure that there may be people here who know the scientific
> > computing world enough to help me with my question.
> > [clip]
>
> Hi Ram,
> funny enough - my to-do list for today is to implement a discrete
> (checker-board kind of) simulation (somewhat similar to the game of
> life, if you will). But before I start looking into your stuff, a few
> questions:
> Why is the web site a ".com" ?
> What is the license ?
> Is the documentation written in MS-Word !?!?
>
> Cheers
> Sebastian Haase

--
Sincerely,
Ram Rachum

From pearu.peterson at gmail.com Mon Oct 5 06:47:33 2009
From: pearu.peterson at gmail.com (Pearu Peterson)
Date: Mon, 05 Oct 2009 13:47:33 +0300
Subject: [SciPy-User] ANN: a journal paper about F2PY has been published
Message-ID: <4AC9CEC5.6050705@cens.ioc.ee>

-------- Original Message --------
Subject: [f2py] ANN: a journal paper about F2PY has been published
Date: Mon, 05 Oct 2009 11:52:20 +0300
From: Pearu Peterson
Reply-To: For users of the f2py program
To: For users of the f2py program

Hi,

A journal paper about F2PY has been published in the International
Journal of Computational Science and Engineering:

  Peterson, P. (2009) 'F2PY: a tool for connecting Fortran and Python
  programs', Int. J. Computational Science and Engineering, Vol. 4,
  No. 4, pp. 296-305.

So, if you would like to cite F2PY in a paper or presentation, using
this reference is recommended. Interscience Publishers will update
their web pages with the new journal number within a few weeks.

A softcopy of the article is available on my homepage:

  http://cens.ioc.ee/~pearu/papers/IJCSE4.4_Paper_8.pdf

Best regards,
Pearu

_______________________________________________
f2py-users mailing list
f2py-users at cens.ioc.ee
http://cens.ioc.ee/mailman/listinfo/f2py-users

From denis-bz-gg at t-online.de Mon Oct 5 06:55:01 2009
From: denis-bz-gg at t-online.de (denis)
Date: Mon, 5 Oct 2009 03:55:01 -0700 (PDT)
Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates
In-Reply-To:
References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
Message-ID: <7c21d153-13b0-41d9-8295-9805dc543f12@k41g2000vbt.googlegroups.com>

On Oct 5, 10:01 am, Ralf Gommers wrote:
> Hi, did you send this to the list because you want to add it to the docs
> (like here http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html)?

I was hoping for comments first: is it readable, accurate, the right
level for anybody ?
(Unreviewed doc is the curse of the web. My imaginary bandwagon for
BetterDoc includes better indexing / tagging and examples of good doc
in various categories.)
How about http://advice.mechanicalkern.com ? It looks to be in the
right direction.

cheers
  -- denis

From seb.haase at gmail.com Mon Oct 5 08:04:44 2009
From: seb.haase at gmail.com (Sebastian Haase)
Date: Mon, 5 Oct 2009 14:04:44 +0200
Subject: [SciPy-User] Simulations
In-Reply-To:
References:
Message-ID:

Hi Ram,
thanks for the clarification - one more question: is your code
entirely in Python ? Isn't that quite slow ?

-Sebastian Haase

On Mon, Oct 5, 2009 at 11:35 AM, cool-RR wrote:
> Hey Sebastian,
> About the .com thing, you're right, I'll be registering .org as well.
> [clip]

From cool-rr at cool-rr.com Mon Oct 5 08:30:59 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Mon, 5 Oct 2009 14:30:59 +0200
Subject: [SciPy-User] Simulations
In-Reply-To:
References:
Message-ID:

The code is entirely in Python, but the code that does all the heavy
lifting has to be supplied by the "simulation package", i.e. you. The
intro explains this.

On Mon, Oct 5, 2009 at 2:04 PM, Sebastian Haase wrote:
> Hi Ram,
> thanks for the clarification - one more question: is your code
> entirely in Python ? Isn't that quite slow ?
>
> -Sebastian Haase
>
> On Mon, Oct 5, 2009 at 11:35 AM, cool-RR wrote:
> > Hey Sebastian,
> > [clip]

From johannesraja at gmail.com Mon Oct 5 10:06:31 2009
From: johannesraja at gmail.com (johannes rara)
Date: Mon, 5 Oct 2009 17:06:31 +0300
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To: <5b8d13220910041629g423b45d7yf68c54737f7f9fce@mail.gmail.com>
References: <5b8d13220910041629g423b45d7yf68c54737f7f9fce@mail.gmail.com>
Message-ID:

Ok, thanks for your response. My numpy version is

    1.2.1

How can I update it to a newer one?

2009/10/5 David Cournapeau :
> On Mon, Oct 5, 2009 at 12:36 AM, johannes rara wrote:
>
>> I cannot understand what is the problem. Any ideas? I have installed
>> XCode and Unix dev tools (and Numpy).
>
> You should have a recent version of numpy (a few days old at most) to
> compile the latest dev version of scipy. The missing get_info function
> was added after numpy 1.3.
>
> cheers,
>
> David

From aisaac at american.edu Mon Oct 5 10:09:15 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 05 Oct 2009 10:09:15 -0400
Subject: [SciPy-User] Simulations
In-Reply-To:
References:
Message-ID: <4AC9FE0B.5000208@american.edu>

On 10/5/2009 3:37 AM, Sebastian Haase wrote:
> my to-do list for today is to implement a discrete
> (checker-board kind of) simulation

http://econpy.googlecode.com/svn/trunk/abm/gridworld/gridworld.py
License: MIT

hth,
Alan Isaac

PS Documentation is only docstrings, because it is simple,
but I can send you examples of use.

From tiago at forked.de Mon Oct 5 12:22:21 2009
From: tiago at forked.de (Tiago de Paula Peixoto)
Date: Mon, 05 Oct 2009 18:22:21 +0200
Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation
Message-ID:

Hi,

This is a shameless plug for graph-tool, a python module for
manipulation and statistical analysis of graphs. It is aimed at
so-called "network research", and contains many algorithms useful in
this area, such as correlations, clustering, community detection,
maximum flow, spectral properties, betweenness, pagerank, etc.

It is written primarily in C++ with the Boost Graph Library, so it
should scale quite well performance-wise when large graphs are used.
The interface is of course in Python, and it integrates very well with
scipy/numpy.

More information, including the complete documentation, is available
at:

    http://graph-tool.forked.de

Cheers,
Tiago

From jrennie at gmail.com Mon Oct 5 12:56:27 2009
From: jrennie at gmail.com (Jason Rennie)
Date: Mon, 5 Oct 2009 12:56:27 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
Message-ID: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>

The low-down:

- "Warning: Desired error not necessarily achieved due to precision loss"
- I'm passing the objective (obj) and gradient (grad)
- I checked that obj and grad are correct using my python equivalent
  of http://people.csail.mit.edu/jrennie/matlab/checkgrad2.m
- I have the same problem whether I use norm=2 or no norm argument
- Termination objective and 2-norm of grad are 2.484517e+06, 2.644732e+07
- Subtracting grad*1e-10 from the parameter vector yields 2.417658e+06,
  2.413900e+07 obj and 2-norm of grad, respectively

I did an implementation of CG in matlab/octave a few years ago and
realize that the problem could be as simple as me needing to set a
different epsilon value or some such. Any suggestions? Nothing jumped
out at me when I gave a careful read to the argument list and glanced
over the code, but I could easily be missing something. My current call:

    wopt = scipy.optimize.fmin_cg(f = ser.obj, fprime = ser.grad,
                                  x0 = w0, norm = 2, callback = cb)

OTOH, is it possible that fmin_cg needs additional tuning? I don't have
much understanding of how solid the fmin_cg code is. Has it seen tons
of use/testing, or is it relatively fresh code?

FYI, I'm using 0.7.0---the version that comes with the current Ubuntu.
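In case it's useful, the obj/grad check I mentioned is essentially the
following (a simplified, from-memory sketch of my checkgrad2.m port,
not the exact code; ser.obj and ser.grad are my objective and gradient
functions from the call above):

    import numpy as np

    def checkgrad(f, fprime, x, eps=1e-6):
        # compare the analytic gradient against central finite differences
        g = fprime(x)
        fd = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = eps
            fd[i] = (f(x + e) - f(x - e)) / (2 * eps)
        # relative difference; ~1e-7 or smaller is what I take as "correct"
        return np.linalg.norm(g - fd) / max(np.linalg.norm(fd), 1e-12)

    print checkgrad(ser.obj, ser.grad, w0)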
My parameter vector is length 12; I have ~50 data points. I've seen CG
work quite nicely on data of a million dimensions...

Thanks,

Jason

From s.mientki at ru.nl Mon Oct 5 13:19:20 2009
From: s.mientki at ru.nl (Stef Mientki)
Date: Mon, 05 Oct 2009 19:19:20 +0200
Subject: [SciPy-User] Simulations
In-Reply-To:
References:
Message-ID: <4ACA2A98.2040402@ru.nl>

hi Ram,

looks interesting,
too bad it doesn't run with python 2.5 :-(

good luck,
Stef

cool-RR wrote:
> Hello,
> [clip]

From josef.pktd at gmail.com Mon Oct 5 13:40:01 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 5 Oct 2009 13:40:01 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>
Message-ID: <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com>

On Mon, Oct 5, 2009 at 12:56 PM, Jason Rennie wrote:
> The low-down:
> [clip]
> My current call:
>
>     wopt = scipy.optimize.fmin_cg(f = ser.obj, fprime = ser.grad,
>                                   x0 = w0, norm = 2, callback = cb)

Does lowering gtol help? e.g. gtol=1e-10
I would keep using norm=inf to force more iterations.

Josef

> OTOH, is it possible that fmin_cg needs additional tuning? I don't have
> much understanding of how solid the fmin_cg code is. Has it seen tons
> of use/testing, or is it relatively fresh code?
> FYI, I'm using 0.7.0---the version that comes with the current Ubuntu.
> My parameter vector is length 12; I have ~50 data points. I've seen CG
> work quite nicely on data of a million dimensions...
> Thanks,
> Jason

From cool-rr at cool-rr.com Mon Oct 5 13:46:24 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Mon, 5 Oct 2009 19:46:24 +0200
Subject: [SciPy-User] Simulations
In-Reply-To: <4AC9FE0B.5000208@american.edu>
References: <4AC9FE0B.5000208@american.edu>
Message-ID:

I think there's been some confusion here: the docstrings in GarlicSim
are standard Python strings. Only the structured introduction is a Word
document.

On 10/5/09, Alan G Isaac wrote:
> On 10/5/2009 3:37 AM, Sebastian Haase wrote:
>> my to-do list for today is to implement a discrete
>> (checker-board kind of) simulation
>
> http://econpy.googlecode.com/svn/trunk/abm/gridworld/gridworld.py
> License: MIT
>
> hth,
> Alan Isaac
>
> PS Documentation is only docstrings, because it is simple,
> but I can send you examples of use.

--
Sent from my mobile device

Sincerely,
Ram Rachum

From sebastian.walter at gmail.com Mon Oct 5 13:48:04 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Mon, 5 Oct 2009 19:48:04 +0200
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>
Message-ID:

In 90% of all cases when an optimization fails, the user provided the
wrong gradient. |g| = 2.644732e+07 at the termination point clearly
points in that direction. Maybe you provided -gradient instead of the
gradient?

It is also possible that you simply cannot identify your 12 parameters
from the 50 measurements you made. Then you'll have to improve your
model or get more measurements.

On Mon, Oct 5, 2009 at 6:56 PM, Jason Rennie wrote:
> The low-down:
> [clip]
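PS: an easy way to convince yourself that a flipped sign alone can
produce this behavior is a toy convex problem (just a sketch, not your
setup):

    import numpy as np
    from scipy.optimize import fmin_cg

    f = lambda w: np.sum((w - 3.0) ** 2)  # convex toy objective
    g = lambda w: 2.0 * (w - 3.0)         # correct gradient
    print fmin_cg(f, np.zeros(3), fprime=g)                # -> approx. [3 3 3]
    print fmin_cg(f, np.zeros(3), fprime=lambda w: -g(w))  # stalls

With the flipped sign the line search cannot make progress, and you
typically get exactly the "Desired error not necessarily achieved"
warning.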
From cool-rr at cool-rr.com Mon Oct 5 13:48:30 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Mon, 5 Oct 2009 19:48:30 +0200
Subject: [SciPy-User] Simulations
In-Reply-To: <4ACA2A98.2040402@ru.nl>
References: <4ACA2A98.2040402@ru.nl>
Message-ID:

It doesn't run on Python 2.5 because it makes use of the
multiprocessing module (new in 2.6) in order to take advantage of
multiple cores for simulation crunching.

On 10/5/09, Stef Mientki wrote:
> hi Ram,
>
> looks interesting,
> too bad it doesn't run with python 2.5 :-(
> [clip]

From robert.kern at gmail.com Mon Oct 5 13:52:07 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 5 Oct 2009 12:52:07 -0500
Subject: [SciPy-User] Simulations
In-Reply-To:
References: <4ACA2A98.2040402@ru.nl>
Message-ID: <3d375d730910051052k70eb55b4i206ba5ed9c8c33be@mail.gmail.com>

On Mon, Oct 5, 2009 at 12:48, cool-RR wrote:
> It doesn't run on Python 2.5 because it makes use of the
> multiprocessing module (new in 2.6) in order to take advantage of
> multiple cores for simulation crunching.

multiprocessing has been backported as an installable package for
Python 2.5. You can simply require that package for 2.5 users if that
is the only 2.6 feature that you use.
http://pypi.python.org/pypi/multiprocessing

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From jrennie at gmail.com Mon Oct 5 13:53:09 2009
From: jrennie at gmail.com (Jason Rennie)
Date: Mon, 5 Oct 2009 13:53:09 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To: <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com>
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com>
Message-ID: <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com>

On Mon, Oct 5, 2009 at 1:40 PM, wrote:
> Does lowering gtol help? e.g. gtol=1e-10
> I would keep using norm=inf to force more iterations.

Thanks for the suggestion, but no, it does not seem to help. I get the
exact same behavior with no norm argument (the default is Inf) and a
gtol=1e-10 argument.

Jason

--
Jason Rennie
Research Scientist, ITA Software
617-714-2645
http://www.itasoftware.com/

From cool-rr at cool-rr.com Mon Oct 5 13:57:51 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Mon, 5 Oct 2009 19:57:51 +0200
Subject: [SciPy-User] Simulations
In-Reply-To: <3d375d730910051052k70eb55b4i206ba5ed9c8c33be@mail.gmail.com>
References: <4ACA2A98.2040402@ru.nl> <3d375d730910051052k70eb55b4i206ba5ed9c8c33be@mail.gmail.com>
Message-ID:

Good idea, I'll do that.

On 10/5/09, Robert Kern wrote:
> multiprocessing has been backported as an installable package for
> Python 2.5. You can simply require that package for 2.5 users if that
> is the only 2.6 feature that you use.
> [clip]

--
Sent from my mobile device

Sincerely,
Ram Rachum

From agile.aspect at gmail.com Mon Oct 5 14:26:26 2009
From: agile.aspect at gmail.com (Agile Aspect)
Date: Mon, 5 Oct 2009 11:26:26 -0700
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To:
References: <5b8d13220910041629g423b45d7yf68c54737f7f9fce@mail.gmail.com>
Message-ID:

I can tell you where the software is, but I can't tell you how to
install it on MacOS:

http://sourceforge.net/projects/numpy/files/NumPy/1.3.0/numpy-1.3.0-py2.6-macosx10.5.dmg/download

On Mon, Oct 5, 2009 at 7:06 AM, johannes rara wrote:
> Ok, thanks for your response. My numpy version is
>
>     1.2.1
>
> How can I update it to a newer one?
>
> 2009/10/5 David Cournapeau :
>> On Mon, Oct 5, 2009 at 12:36 AM, johannes rara wrote:
>>
>>> I cannot understand what is the problem. Any ideas? I have installed
>>> XCode and Unix dev tools (and Numpy).
>>
>> You should have a recent version of numpy (a few days old at most) to
>> compile the latest dev version of scipy. The missing get_info function
>> was added after numpy 1.3.
>>
>> cheers,
>>
>> David

--
     "We are drowning in information and starving for knowledge."

                                                          -- Rutherford D. Roger

From josef.pktd at gmail.com Mon Oct 5 14:35:23 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 5 Oct 2009 14:35:23 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To: <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com>
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com>
Message-ID: <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com>

On Mon, Oct 5, 2009 at 1:53 PM, Jason Rennie wrote:
> Thanks for the suggestion, but no, it does not seem to help. I get the
> exact same behavior with no norm argument (the default is Inf) and a
> gtol=1e-10 argument.
> Jason

In this case, I either agree with Sebastian, or you are already at a
minimum up to the usual precision, or your problem is badly scaled (my
next guesses).

In your initial example, you had an improvement with a stepsize of
1e-10; however, fmin_cg has a minimum stepsize of amin = 1e-8
(hardcoded) in linesearch, if my very fast skimming of the code is
correct.

I don't know about fmin_cg, but if I remember correctly, I got this
return code with other minimizers when I had an (almost) perfect fit,
with no noise in the simulation. Otherwise, you could try to rescale
your problem to make the parameters larger in absolute value.

Josef

From pgmdevlist at gmail.com Mon Oct 5 14:38:04 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 5 Oct 2009 14:38:04 -0400
Subject: [SciPy-User] Problems installing Scipy on Snow Leopard
In-Reply-To:
References: <5b8d13220910041629g423b45d7yf68c54737f7f9fce@mail.gmail.com>
Message-ID: <18C37AC9-E931-4BDA-8DFA-29DE0D5CD492@gmail.com>

On Oct 5, 2009, at 2:26 PM, Agile Aspect wrote:
> I can tell you where the software is, but I can't tell you how to
> install it on MacOS:
>
> http://sourceforge.net/projects/numpy/files/NumPy/1.3.0/numpy-1.3.0-py2.6-macosx10.5.dmg/download

My 2c: Install numpy from sources. Far easier to debug that way, and
you're sure that your numpy is 64b. Before anything else, get a Fortran
compiler here:

    http://r.research.att.com/tools/

Installing from source (that is, by running `python setup.py install`)
should be straightforward, provided you stick with the Python that
ships with Snow Leopard (that point blocked me for a while). If you
should run into problems, just drop a line.
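P.S.: for reference, the whole dance is roughly the following (from
memory, untested; adjust the directory name to whatever tarball or
checkout you grab):

    $ cd numpy-1.3.0                  # unpacked source directory
    $ /usr/bin/python setup.py build  # Apple's Python 2.6.1
    $ sudo /usr/bin/python setup.py install
    $ /usr/bin/python -c "import numpy; print numpy.__version__"

and then the same build/install steps in the scipy source directory,
with the LDFLAGS="-arch x86_64" FFLAGS="-arch x86_64" from the wiki
page if the build complains about architectures.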
From jrennie at gmail.com Mon Oct 5 14:38:48 2009
From: jrennie at gmail.com (Jason Rennie)
Date: Mon, 5 Oct 2009 14:38:48 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To:
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>
Message-ID: <75c31b2a0910051138s510ce7c5w2df180c645d5367e@mail.gmail.com>

On Mon, Oct 5, 2009 at 1:48 PM, Sebastian Walter wrote:
> In 90% of all cases when an optimization fails, the user provided the
> wrong gradient. |g| = 2.644732e+07 at the termination point clearly
> points in that direction. Maybe you provided -gradient instead of the
> gradient?

That is certainly a mistake I made many times before I had a solid
obj/grad checker, but I can't remember a single case since. OTOH, I've
seen problems like this resulting from subtle issues in a line search
implementation. My objective is convex; it's basically
sum(exp(dot(X,w))^2). So, obj/grad wouldn't go down by subtracting
1e-10*grad from the param. vector if it were a sign error, no?

> It is also possible that you simply cannot identify your 12 parameters
> from the 50 measurements you made. Then you'll have to improve your
> model or get more measurements.

I threw out those numbers to give a sense of the size of the problem,
but I don't understand how they're as relevant as you seem to be
suggesting. The function I'm trying to minimize is a least squares loss
between target and predicted values, and the function doesn't change
other than via the 'x' that fmin_cg manipulates. fmin_cg should still
find a local min even if I have too many parameters, no?

FYI, this is the sequence of iterations I'm seeing (the first line is
calculated before the fmin_cg call; the rest come from the callback).
The 2-norm of the gradient is the "grad" number.

obj=2.710e+06    grad=9.200e+04
obj=1.928345e+06 grad=7.290656e+05
obj=2.483969e+06 grad=2.642844e+07
obj=2.484517e+06 grad=2.644731e+07
obj=2.484517e+06 grad=2.644732e+07
obj=2.484517e+06 grad=2.644732e+07
obj=2.484517e+06 grad=2.644732e+07
obj=2.484517e+06 grad=2.644732e+07

Jason

--
Jason Rennie
Research Scientist, ITA Software
617-714-2645
http://www.itasoftware.com/

From jrennie at gmail.com Mon Oct 5 14:40:31 2009
From: jrennie at gmail.com (Jason Rennie)
Date: Mon, 5 Oct 2009 14:40:31 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To: <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com>
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com>
Message-ID: <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com>

On Mon, Oct 5, 2009 at 2:35 PM, wrote:
> In this case, I either agree with Sebastian, or you are already at a
> minimum up to the usual precision, or your problem is badly scaled (my
> next guesses).
>
> In your initial example, you had an improvement with a stepsize of
> 1e-10; however, fmin_cg has a minimum stepsize of amin = 1e-8
> (hardcoded) in linesearch, if my very fast skimming of the code is
> correct.

Ah. This sounds like it may be the problem. When I use 1e-8, I don't
get a smaller objective.
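For the record, the probe is just this (a rough sketch; ser.obj and
ser.grad as in my original call, w is the parameter vector fmin_cg
stopped at):

    for step in (1e-8, 1e-10):
        w2 = w - step * ser.grad(w)           # tiny step along -gradient
        print step, ser.obj(w2) - ser.obj(w)  # negative = improvement

With step=1e-10 the difference is negative, with step=1e-8 it isn't,
which seems consistent with the hardcoded amin.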
Jason

>
> I don't know about fmin_cg but, if I remember correctly, I got this return
> code with other minimizers when I had an (almost) perfect fit, with no
> noise in the simulation. Otherwise, you could try to rescale your problem
> to make the parameters larger in absolute value.
>
> Josef
>
-- 
Jason Rennie
Research Scientist, ITA Software
617-714-2645
http://www.itasoftware.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com  Mon Oct 5 14:51:27 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 5 Oct 2009 14:51:27 -0400
Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad
In-Reply-To: <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com>
References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com>
	<1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com>
	<75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com>
	<1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com>
	<75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com>
Message-ID: <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com>

On Mon, Oct 5, 2009 at 2:40 PM, Jason Rennie wrote:
> On Mon, Oct 5, 2009 at 2:35 PM, wrote:
>>
>> In this case, I either agree with Sebastian, or you are already at a
>> minimum up to the usual precision, or your problem is badly scaled (my
>> next guesses).
>>
>> In your initial example, you had an improvement with a stepsize 1e-10,
>> however fmin_cg has a minimum stepsize of amin = 1e-8 (hardcoded) in
>> linesearch, if my very fast skimming of the code is correct.
>
> Ah. This sounds like it may be the problem. When I use 1e-8, I don't get
> a smaller objective.
> Jason

Then dividing your X or w by something large (1e5, 1e8) in the objective
function should help?

Josef

>
>> I don't know about fmin_cg but, if I remember correctly, I got this
>> return code with other minimizers when I had an (almost) perfect fit,
>> with no noise in the simulation. Otherwise, you could try to rescale
>> your problem to make the parameters larger in absolute value.
>>
>> Josef
>
> --
> Jason Rennie
> Research Scientist, ITA Software
> 617-714-2645
> http://www.itasoftware.com/
>
>

From ckoers at telenet.be  Mon Oct 5 15:11:00 2009
From: ckoers at telenet.be (Cesar Koers)
Date: Mon, 05 Oct 2009 21:11:00 +0200
Subject: [SciPy-User] Simulations
In-Reply-To: 
References: 
Message-ID: <4ACA44C4.5090003@telenet.be>

Hi Ram,

I quickly read through your intro doc; I think you've explained your idea
quite well. One remark, though:

I think your framework would fit well to time-domain (transient) models.
But at this moment I don't see how you could cast a frequency-domain
simulation (commonly used in EM solvers) in it. I'd be careful with the
idea that 'all simulations' fit into this.

What I think is key to the success of this kind of framework is how well
it handles the 'bureaucracy' of performing simulations (and speed, but
you've already mentioned that the actual number crunching is up to the
user of GarlicSim).
With this, I mean the boring stuff, like e.g.:

* keeping track of which parameters vary between simulations
* extracting data from a set of simulations as a function of one of these
  parameters
* storing (and backing up) simulation results without taking up too much
  space and needing to invent unique and descriptive file names
* being able to redo a simulation (storing simulation parameters with
  results)
* making simulation reports
* comparing results with real-world data
* for long simulations, being able to continue simulation after a crash

Just my 2 cents

Best regards

C

cool-RR wrote:
> Hello,
>
> This is not directly related to SciPy; I'm posting it here because I
> figure that there may be people here who know the scientific computing
> world enough to help me with my question.
>
> I've been working on an open-source scientific computing project for
> about 6 months now, and I've come to the conclusion that it's about time
> to find other users except myself for it, so I may get valuable feedback
> about which direction I should be taking this project.
>
> The project is called GarlicSim (http://garlicsim.com). It's a Pythonic
> platform for working with simulations. You may read more about it on the
> webpage. In short, it's a very general framework for creating, running
> and analyzing simulations. It's not specific to any scientific field;
> its role is to provide a general mold into which all simulations can be
> cast. If you want to know more about it you can also read a
> (yet-incomplete) introduction to it.
>
> So what I want to know is, who would be good potential first users for
> this, and how could I reach them?
> I'm not even sure which scientific field I would like to target, so
> please suggest.
>
>
> Thanks,
> Ram Rachum
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Gaetan Cesar Koers
Kerkveldweg 82
1851 Humbeek
+32(0)486 20 11 16

From nitinchandra1 at gmail.com  Mon Oct 5 15:26:29 2009
From: nitinchandra1 at gmail.com (nitin chandra)
Date: Tue, 6 Oct 2009 00:56:29 +0530
Subject: [SciPy-User] NumPy error
Message-ID: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com>

Hello Everyone,

I have a System Configured with P4 1.8GHz, 256 DDR RAM, 80 GB HDD PATA
which has FC10 with the default kernel 2.6.27 installed.

I have installed the following from source / tar.gz / gz files in /opt/

I installed Python 2.6.2, LAPACK-3.2.1, XBLAS-1.0.248, ATLAS-3.9.14
(Linux_P4ESSE2), FFTW-3.2.2, nose-0.11.1.

I am stuck at NumPy-1.3.0rc2 and will install Scipy next. I created
various log files during the process of './configure' or 'make'.

Attached is a file with various parameters given during installation.

my lapack_LINUX.a = liblapack.a = 15MB

( I have already spent 3 weeks to this point.... desperately need some
guidance / help ... and let me know where am I going wrong )

TIA

Nitin

I am getting the following error :-

ERRORS

[root at mi newpy]# python
Python 2.6.2 (r262:71600, Sep 28 2009, 21:33:37)
[GCC 4.3.2 20081105 (Red Hat 4.3.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/python262/lib/python2.6/site-packages/numpy/__init__.py", line 130, in <module>
    import add_newdocs
  File "/opt/python262/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from lib import add_newdoc
  File "/opt/python262/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in <module>
    from polynomial import *
  File "/opt/python262/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 18, in <module>
    from numpy.linalg import eigvals, lstsq
  File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 47, in <module>
    from linalg import *
  File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 22, in <module>
    from numpy.linalg import lapack_lite
ImportError: /opt/atlas/lib/liblapack.so: undefined symbol: blas_zhemv2_x_
>>>
[1]+  Stopped                 /opt/python262/bin/python
[root at mi newpy]# cat ~/.bash_profile
-------------- next part --------------
===========================================================================================================
INSTALLING FFTW

;;;this is for double precision
#./configure --prefix=/opt/fftw332 --enable-shared --enable-threads --enable-sse2 --enable-portable-binary
#make
#make install

RUN THE ./configure 2nd TIME
;;;This is for single precision
#./configure --prefix=/opt/fftw332 --enable-shared --enable-threads --enable-sse --enable-portable-binary \
 --enable-float
#make >make.log
#make install >make.install.log
============================================================================================================
XBLAS.tar.gz INSTALLATION

# tar zxvf xblas.tar.gz
# cd xblas-1.0.248
# autoconf
# CC=gcc FC=gfortran ./configure --prefix=/opt/xblas
# m4 Makefile.m4 >Makefile
# make makefiles > makefiles.log
# make > make.log

To UNINSTALL
# make clean
============================================================================================================
LAPACK-3.2.0 INSTALLATION

#tar zxvf lapack.tgz
#cd lapack-3.2.1
#cp INSTALL/make.inc.gfortran make.inc

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
IMPORTANT :- INSTALL ATLAS IN A BOGUS/TEMP DIR, WHICH YOU WILL DELETE AFTER
DOING THE FOLLOWING: "BOGUS INSTALL"

# tar zxvf atlas3.9.14.tar.bz2
# mv ATLAS ATLAS_tmp
# cd ATLAS_tmp
# /home/nitin/newpy/ATLAS-3.9.14/configure -Si cputhrchk 0 -b 32 -D c -DPentiumP4=1790 \
 --dylibs -Fa alg -fPIC
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

AFTER RUNNING THE ./configure (above), THIS WILL MAKE A make.inc FILE

EDIT LAPACK/make.inc :
COPY from ATLAS_tmp (./configure creates) make.inc
SEARCH FOR THE FOLLOWING LINE AND Copy after =
F77FLAGS = -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m32 -fPIC

PASTE into LAPACK/make.inc OPT= :

FORTRAN = gfortran -fimplicit-none -g
OPTS = -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -fPIC -m32 -fPIC
DRVOPTS = $(OPTS)
NOOPT = -fomit-frame-pointer -mfpmath=387 -m32
LOADER = gfortran -g
LOADOPTS= $(OPTS)

further down Un-Comment the lines and add the path:
USEXBLAS = Yes
XBLASLIB = /home/nitin/newpy/xblas-1.0.248/libxblas.a
#XBLASLIB = -lxblas
save and exit

#joe Makefile   (nano/pico/vi)
edit
all: lapack_install lib lapack_testing blas_testing
save and exit

# make blaslib > blaslib.log
# make > make.log
# cp lapack_LINUX.a liblapack.a
# cp liblapack.a /home/nitin/newpy/Linux_P4ESSE2/lib/.    (Overwrite?
y) ;;; this liblapack.a = lapack_LINUX.a = 15MB approx (at least more than 6MB) The following is in Makefile all:lapack_install lib lapack_testing blas_testing ;;; for the time being ;;; removed 'testing' & ;;; 'timing' {{{ OR # ld -o /opt/atlas/lib/liblapack.so -shared --whole-archive\ --export-dynamic /home/nitin/newpy/lapack-3.2.1/liblapack.a }}} # rm -Rf ATLAS_tmp/ TO UNINSTALL [ LAPACK ]#rm -vfr lapack_LINUX.a blas_LINUX.a tmglib_LINUX.a lapacklib.a # make clean ========================================================================================== ATLAS INSTALLATIONS INSTRUCTIONS # tar jxvf atlas-3.9.14.tar.bz2 ;;;Rename ATLAS direcotry to ATLAS-3.9.14 # mv ATLAS ATLAS-3.9.14 ;;; Rename the directory, convineant ;;; Turn off CPU throttling when installing ATLAS , Fedora # /usr/bin/cpufreq-selector -g performance ;;; On my Core2Duo, cpufreq-selector only changes the parameters of the first CPU, ;;; regardless of which cpu you specify. I suspect this is a bug, because on earlier ;;; systems, the remaining CPUs were controlled via a logical link to ;;; /sys/devices/system/cpu/cpu0/. In this case, the only way I found to force the ;;; second processor to also run at its peak frequency was to issue the following as ;;; root after setting CPU0 to performance: cp /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \ /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor # cd .. (one up dir) ;;; Out of ATLAS-3.9.14 dir /home/nitin/newpy # mkdir Linux_P4ESSE2 ;;; Make a new dir Or as.... Linux_C2D64SSE3 (Core2Duo) # /home/nitin/newpy/ATLAS-3.9.14/configure --with-netlib-lapack=/home/nitin/newpy/lapack-3.2.1/lapack_LINUX.a\ --dylibs -b 32 -D c -DPentiumP4=1790 --prefix=/opt/atlas -Ss flapack=/home/nitin/newpy/lapack-3.2.1/SRC\ -Fa alg -fPIC -Si cputhrchk 0 > config.log ;;; takes a good amount of an hour ...frankly depending on your machine config. # make check > check.log # make time > time.log # make install > install.log [Linux_P4ESSE2]# cd lib # make shared > shared.log # cp -f *.so /opt/atlas/lib/. # cd .. 
[Linux_P4ESSE2]# cd bin # make xdlutst_dyn >xdlutst.log ( export ATLAS=/usr/local/lib/atlas ) UNINSTALL [Linux_P4ESSE2]# make clean ===================================================================================== INSTALLING nose #tar zxvf nose-0.11.1.tar.tar # cd nose-0.11.1 #/opt/python262/bin/python setup.py install --prefix=/opt/python262 2>&1 | tee nose.log ===================================================================================== INSTALLING numpy # tar zxvf numpy-1.2.1.tar.gz # cd numpy-1.2.1 # cp site.cfg.example site.cfg # joe site.cfg [DEFAULT] library_dirs = /usr/local/lib:/opt/atlas/lib:/opt/fftw332/lib:/opt/python262/lib include_dirs = /usr/local/include:/opt/atlas/include:/opt/fftw332/include:/opt/python262/include [blas_opt] libraries = f77blas, cblas, atlas [lapack_opt] libraries = lapack, f77blas, cblas, atlas, g2c [fftw] libraries = fftw3, fftw3f [fftw_opt] libraries = fftw3_threads, fftw3f_threads SAVE and EXIT # /opt/python262/bin/python setup.py -v config_fc build_ext --fcompiler=gnu95 build | tee build.log # /opt/python262/bin/python setup.py install --prefix=/opt/python262 2>&1 | tee install.log # source ~/.bashrc TO UN-INSTALL numpy Remove dir 'build' Remove /opt/python262/lib/python2.6/site-packages/numpy-*.egg and Remove -rvf /opt/python262/lib/python2.6/site-packages/numpy/ ;;; numpy/ direcotry ================================================================================ INSTALLING SciPy /home/nitin/newpy/scipy-0.7.1.tar.gz # tar zxvf scipy-0.7.1.tar.gz # cd scipy-0.7.1 # XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ERRORS [root at mi newpy]# python Python 2.6.2 (r262:71600, Sep 28 2009, 21:33:37) [GCC 4.3.2 20081105 (Red Hat 4.3.2-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "", line 1, in File "/opt/python262/lib/python2.6/site-packages/numpy/__init__.py", line 130, in import add_newdocs File "/opt/python262/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in from lib import add_newdoc File "/opt/python262/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in from polynomial import * File "/opt/python262/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 18, in from numpy.linalg import eigvals, lstsq File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 47, in from linalg import * File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 22, in from numpy.linalg import lapack_lite ImportError: /opt/atlas/lib/liblapack.so: undefined symbol: blas_zhemv2_x_ >>> [1]+ Stopped /opt/python262/bin/python [root at mi newpy]# cat ~/.bash_profile From cmac at mit.edu Mon Oct 5 15:34:37 2009 From: cmac at mit.edu (Christopher MacMinn) Date: Mon, 5 Oct 2009 15:34:37 -0400 Subject: [SciPy-User] Problems installing Scipy on Snow Leopard Message-ID: <95da30590910051234i2acb3d25k416cd4b4458493f7@mail.gmail.com> >> I can tell you where the software is but I can't tell you how to >> install on a MacOS: > > My 2c: > Install numpy from sources. Far easier to debug that way, and you're > sure that your numpy is 64b. I suggest following the instructions at the link below for the whole thing -- they are very clear, and worked for me with no problems. 
http://blog.hyperjeff.net/?p=160 Best, Chris From pgmdevlist at gmail.com Mon Oct 5 15:43:45 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 5 Oct 2009 15:43:45 -0400 Subject: [SciPy-User] Problems installing Scipy on Snow Leopard In-Reply-To: <95da30590910051234i2acb3d25k416cd4b4458493f7@mail.gmail.com> References: <95da30590910051234i2acb3d25k416cd4b4458493f7@mail.gmail.com> Message-ID: On Oct 5, 2009, at 3:34 PM, Christopher MacMinn wrote: >>> I can tell you where the software is but I can't tell you how to >>> install on a MacOS: >> >> My 2c: >> Install numpy from sources. Far easier to debug that way, and you're >> sure that your numpy is 64b. > > I suggest following the instructions at the link below for the whole > thing -- they are very clear, and worked for me with no problems. Except that as already stated on this list: * messing around with your /System/Library/Framework/Python is a VERY bad idea. * sudoing your install might not be the best option. You can define a directory where to install your new sources with the --user option of `python setup.py install`. It's safer and cleaner. From jrennie at gmail.com Mon Oct 5 16:11:09 2009 From: jrennie at gmail.com (Jason Rennie) Date: Mon, 5 Oct 2009 16:11:09 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> Message-ID: <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> The bug seems to be that scipy.optimize.linesearch.line_search can return a step size which increases the objective. Later linesearches are then fubar'd b/c the (phi0-old_old_fval)/derphi0 calculation yields a negative value. Would someone mind sanity-checking this assertion? Is it possible for minpack2.dcsrch to return a step which yields a negative objective? I'm seeing it when the amin value is hit. I.e. it's returning a step size of 1e-8. Thanks, Jason -- Jason Rennie Research Scientist, ITA Software 617-714-2645 http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Mon Oct 5 16:18:41 2009 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 05 Oct 2009 16:18:41 -0400 Subject: [SciPy-User] Simulations In-Reply-To: References: <4AC9FE0B.5000208@american.edu> Message-ID: <4ACA54A1.2050809@american.edu> >> On 10/5/2009 3:37 AM, Sebastian Haase wrote: >>> my to do list for today is to implement a discrete >>> (checker board kind-of) simulation > On 10/5/09, Alan G Isaac wrote: >> http://econpy.googlecode.com/svn/trunk/abm/gridworld/gridworld.py >> License: MIT >> >> PS Documentation is only docstrings, because it is simple, >> but I can send you examples of use. On 10/5/2009 1:46 PM, cool-RR wrote: > I think there's been some confusion here: the docstrings in GarlicSim > are standard Python strings. Only the structured introduction is a > Word document. Sorry for any confusion. gridworld.py is not related to GarlicSim in any way. It implements some classes useful for simulation on a grid, as sought by Sebastian. (It requires Python 2.6.) 
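For the checker-board case specifically, the synchronous update itself is
tiny in plain numpy. A generic life-like sketch on a toroidal grid (this is
not gridworld's API, just the bare pattern):

import numpy as np

def step(grid):
    # count the 8 neighbours of every cell, with periodic wrapping
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if (i, j) != (0, 0))
    # Conway-style rule on a 0/1 integer array: birth on 3, survival on 2 or 3
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)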
Alan Isaac From agile.aspect at gmail.com Mon Oct 5 16:19:46 2009 From: agile.aspect at gmail.com (Agile Aspect) Date: Mon, 5 Oct 2009 13:19:46 -0700 Subject: [SciPy-User] NumPy error In-Reply-To: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com> References: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com> Message-ID: On Mon, Oct 5, 2009 at 12:26 PM, nitin chandra wrote: > Hello Everyone, > > I have a System Configured with P4 1.8GHz, 256 DDR RAM, 80 GB HDD PATA > which has FC10 with the default kernel 2.6.27 installed. > > I have Installed following from source / tar.gz / gz files in /opt/ > > I Installed Python 2.6.2, LAPACK-3.2.1, XBLAS-1.0.248, ATLAS-3.9.14 ( > Linux_P4ESSE2 ), FFTW-3.2.2, nose-0.11.1, > > Stuck at NumPy-1.3.0rc2 and will install Scipy next. I created various > Log file duing the process of './configure' or 'make' > > Attached is a file with various parameters given during installation. > > my lapack_LINUX.a = liblapack.a = 15MB > > ( I have already spent 3 weeks, to this point.... desperatly need some > guidance / help ... and let me know where am i going wrong ) . > > TIA > > Nitin > > I am getting the following error :- > > ERRORS > > > [root at mi newpy]# python > Python 2.6.2 (r262:71600, Sep 28 2009, 21:33:37) > [GCC 4.3.2 20081105 (Red Hat 4.3.2-7)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import numpy > Traceback (most recent call last): > ?File "", line 1, in > ?File "/opt/python262/lib/python2.6/site-packages/numpy/__init__.py", > line 130, in > ? ?import add_newdocs > ?File "/opt/python262/lib/python2.6/site-packages/numpy/add_newdocs.py", > line 9, in > ? ?from lib import add_newdoc > ?File "/opt/python262/lib/python2.6/site-packages/numpy/lib/__init__.py", > line 13, in > ? ?from polynomial import * > ?File "/opt/python262/lib/python2.6/site-packages/numpy/lib/polynomial.py", > line 18, in > ? ?from numpy.linalg import eigvals, lstsq > ?File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/__init__.py", > line 47, in > ? ?from linalg import * > ?File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/linalg.py", > line 22, in > ? ?from numpy.linalg import lapack_lite > ImportError: /opt/atlas/lib/liblapack.so: undefined symbol: blas_zhemv2_x_ >>>> > [1]+ ?Stopped ? ? ? ? ? ? ? ? /opt/python262/bin/python > [root at mi newpy]# cat ~/.bash_profile > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > The error message was that it has a missing symbol, namely, blas_zhemv2_x_ I'm not real sure how to read this symbol but try nm libblas.a | grep _zhemv2 and see if the symbol is defined. Note, I don't have this symbol in my libraries but I'm using different versions of the libraries. Please post your site.cfg file found in /python2.6/site-packages/numpy/distutils. Is it possible you didn't add the BLAS libraries to the site.cfg file? -- "We are drowning in information and starving for knowledge." -- Rutherford D. Roger From tim.whitcomb at nrlmry.navy.mil Mon Oct 5 16:33:11 2009 From: tim.whitcomb at nrlmry.navy.mil (Whitcomb, Mr. 
Tim) Date: Mon, 5 Oct 2009 13:33:11 -0700 Subject: [SciPy-User] Problems installing Scipy on Snow Leopard In-Reply-To: References: <95da30590910051234i2acb3d25k416cd4b4458493f7@mail.gmail.com> Message-ID: I was able to follow the instructions there as well to get the entire SciPy stack installed, and I found the easiest way to do this by far was to easy_install virtualenv, then use a newly-created virtual environment to install all the packages - no sudo, no messing with the system Python: $ sudo easy_install virtualenv $ virtualenv --no-site-packages scipy $ . scipy/bin/activate (scipy) $ <<> > -----Original Message----- > From: scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] On Behalf Of Pierre GM > Sent: Monday, October 05, 2009 12:44 > To: SciPy Users List > Subject: Re: [SciPy-User] Problems installing Scipy on Snow Leopard > > > On Oct 5, 2009, at 3:34 PM, Christopher MacMinn wrote: > > >>> I can tell you where the software is but I can't tell you how to > >>> install on a MacOS: > >> > >> My 2c: > >> Install numpy from sources. Far easier to debug that way, > and you're > >> sure that your numpy is 64b. > > > > I suggest following the instructions at the link below for > the whole > > thing -- they are very clear, and worked for me with no problems. > > Except that as already stated on this list: > * messing around with your /System/Library/Framework/Python > is a VERY bad idea. > * sudoing your install might not be the best option. You can > define a directory where to install your new sources with the > --user option of `python setup.py install`. It's safer and cleaner. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Mon Oct 5 16:33:40 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 5 Oct 2009 16:33:40 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> Message-ID: <1cd32cbb0910051333s428c5a53n946eea24a93f3d29@mail.gmail.com> On Mon, Oct 5, 2009 at 4:11 PM, Jason Rennie wrote: > The bug seems to be that scipy.optimize.linesearch.line_search can return a > step size which increases the objective. ?Later linesearches are then > fubar'd b/c the?(phi0-old_old_fval)/derphi0 calculation yields a negative > value. > Would someone mind sanity-checking this assertion? ?Is it possible > for?minpack2.dcsrch to return a step which yields a negative objective? ?I'm > seeing it when the amin value is hit. ?I.e. it's returning a step size of > 1e-8. > Thanks, > Jason bug candidate: linesearch doesn't honor warn Has fortran 0 or 1 based indexing? 
I think task[1:4] == 'WARN': should instead be task[:4] == 'WARN': Josef > > -- > Jason Rennie > Research Scientist, ITA Software > 617-714-2645 > http://www.itasoftware.com/ > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From jrennie at gmail.com Mon Oct 5 16:55:38 2009 From: jrennie at gmail.com (Jason Rennie) Date: Mon, 5 Oct 2009 16:55:38 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <1cd32cbb0910051333s428c5a53n946eea24a93f3d29@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> <1cd32cbb0910051333s428c5a53n946eea24a93f3d29@mail.gmail.com> Message-ID: <75c31b2a0910051355x70018576l7906f56f72faec02@mail.gmail.com> Looks like optimize.zoom() is also buggy in that it will return a step size corresponding to an increased objective if it can't find a step in maxiter iterations. Jason On Mon, Oct 5, 2009 at 4:33 PM, wrote: > On Mon, Oct 5, 2009 at 4:11 PM, Jason Rennie wrote: > > The bug seems to be that scipy.optimize.linesearch.line_search can return > a > > step size which increases the objective. Later linesearches are then > > fubar'd b/c the (phi0-old_old_fval)/derphi0 calculation yields a negative > > value. > > Would someone mind sanity-checking this assertion? Is it possible > > for minpack2.dcsrch to return a step which yields a negative objective? > I'm > > seeing it when the amin value is hit. I.e. it's returning a step size of > > 1e-8. > > Thanks, > > Jason > > bug candidate: linesearch doesn't honor warn > > Has fortran 0 or 1 based indexing? > > I think task[1:4] == 'WARN': > should instead be > > task[:4] == 'WARN': > > > Josef > > > > > > > -- > > Jason Rennie > > Research Scientist, ITA Software > > 617-714-2645 > > http://www.itasoftware.com/ > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Jason Rennie Research Scientist, ITA Software 617-714-2645 http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jrennie at gmail.com Mon Oct 5 17:51:35 2009 From: jrennie at gmail.com (Jason Rennie) Date: Mon, 5 Oct 2009 17:51:35 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <75c31b2a0910051331j50ea1e6amddd94cbc1097808e@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> <75c31b2a0910051331j50ea1e6amddd94cbc1097808e@mail.gmail.com> Message-ID: <75c31b2a0910051451o7ff364b3nf7b6824d088fe63@mail.gmail.com> I created a trac ticket for this: http://projects.scipy.org/scipy/ticket/1012 Jason On Mon, Oct 5, 2009 at 4:31 PM, Jason Rennie wrote: > Here's one fix (I'm still seeing problems, but this definitely improves the > situation): > $ diff linesearch.py > /usr/lib/python2.6/dist-packages/scipy/optimize/linesearch.py > 52c52 > < if task[:5] == 'ERROR' or task[:4] == 'WARN': > --- > > if task[:5] == 'ERROR' or task[1:4] == 'WARN': > > linesearch.line_search wasn't catching WARNINGs returned > by minpack2.dcsrch. > > Jason > > On Mon, Oct 5, 2009 at 4:11 PM, Jason Rennie wrote: > >> The bug seems to be that scipy.optimize.linesearch.line_search can return >> a step size which increases the objective. Later linesearches are then >> fubar'd b/c the (phi0-old_old_fval)/derphi0 calculation yields a negative >> value. >> >> Would someone mind sanity-checking this assertion? Is it possible >> for minpack2.dcsrch to return a step which yields a negative objective? I'm >> seeing it when the amin value is hit. I.e. it's returning a step size of >> 1e-8. >> >> Thanks, >> >> Jason >> >> -- >> Jason Rennie >> Research Scientist, ITA Software >> 617-714-2645 >> http://www.itasoftware.com/ >> >> > > > -- > Jason Rennie > Research Scientist, ITA Software > 617-714-2645 > http://www.itasoftware.com/ > > -- Jason Rennie Research Scientist, ITA Software 617-714-2645 http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrennie at gmail.com Mon Oct 5 18:32:02 2009 From: jrennie at gmail.com (Jason Rennie) Date: Mon, 5 Oct 2009 18:32:02 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <75c31b2a0910051355x70018576l7906f56f72faec02@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> <1cd32cbb0910051333s428c5a53n946eea24a93f3d29@mail.gmail.com> <75c31b2a0910051355x70018576l7906f56f72faec02@mail.gmail.com> Message-ID: <75c31b2a0910051532s4f819c38g206d699b7dafff79@mail.gmail.com> Setting amin in linesearch.line_search() to a smaller value (I tried 1e-12) seems to be a hack workaround for the zoom() issue (since it almost never falls-through to the (old?) optimize.line_search() method). 
But, I'm wondering: has Johnathan Shewchuk's "Preconditioned Nonlinear Conjugate Gradients with Secant and Polak-Ribiere" (pg. 53/59 of his tutorial) been considered? I used this as the basis for a CG Matlab implementation for my thesis work and it handled a very large problem (million+ params) nicely once I worked out the numerical issues (it's amazing how much work is involved in turning such detailed psuedocode into a solid implementation!). Link to the paper: http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf Jason On Mon, Oct 5, 2009 at 4:55 PM, Jason Rennie wrote: > Looks like optimize.zoom() is also buggy in that it will return a step size > corresponding to an increased objective if it can't find a step in maxiter > iterations. > Jason > > > On Mon, Oct 5, 2009 at 4:33 PM, wrote: > >> On Mon, Oct 5, 2009 at 4:11 PM, Jason Rennie wrote: >> > The bug seems to be that scipy.optimize.linesearch.line_search can >> return a >> > step size which increases the objective. Later linesearches are then >> > fubar'd b/c the (phi0-old_old_fval)/derphi0 calculation yields a >> negative >> > value. >> > Would someone mind sanity-checking this assertion? Is it possible >> > for minpack2.dcsrch to return a step which yields a negative objective? >> I'm >> > seeing it when the amin value is hit. I.e. it's returning a step size >> of >> > 1e-8. >> > Thanks, >> > Jason >> >> bug candidate: linesearch doesn't honor warn >> >> Has fortran 0 or 1 based indexing? >> >> I think task[1:4] == 'WARN': >> should instead be >> >> task[:4] == 'WARN': >> >> >> Josef >> >> >> >> > >> > -- >> > Jason Rennie >> > Research Scientist, ITA Software >> > 617-714-2645 >> > http://www.itasoftware.com/ >> > >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Jason Rennie > Research Scientist, ITA Software > 617-714-2645 > http://www.itasoftware.com/ > > -- Jason Rennie Research Scientist, ITA Software 617-714-2645 http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Mon Oct 5 21:50:20 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 5 Oct 2009 21:50:20 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <75c31b2a0910051532s4f819c38g206d699b7dafff79@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <1cd32cbb0910051040l3d58edbao70cdb3006fdf984e@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> <1cd32cbb0910051333s428c5a53n946eea24a93f3d29@mail.gmail.com> <75c31b2a0910051355x70018576l7906f56f72faec02@mail.gmail.com> <75c31b2a0910051532s4f819c38g206d699b7dafff79@mail.gmail.com> Message-ID: <1cd32cbb0910051850u13a46399ub0553459968e7e32@mail.gmail.com> On Mon, Oct 5, 2009 at 6:32 PM, Jason Rennie wrote: > Setting amin in linesearch.line_search() to a smaller value (I tried 1e-12) > seems to be a hack workaround for the zoom() issue (since it almost never > falls-through to the (old?) optimize.line_search() method). ?But, I'm > wondering: has Johnathan Shewchuk's "Preconditioned Nonlinear Conjugate > Gradients with Secant and Polak-Ribiere" (pg. 53/59 of his tutorial) been > considered? ?I used this as the basis for a CG Matlab implementation for my > thesis work and it handled a very large problem (million+ params) nicely > once I worked out the numerical issues (it's amazing how much work is > involved in turning such detailed psuedocode into a solid implementation!). > ?Link to the paper: > http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf > > Jason > > > On Mon, Oct 5, 2009 at 4:55 PM, Jason Rennie wrote: >> >> Looks like optimize.zoom() is also buggy in that it will return a step >> size corresponding to an increased objective if it can't find a step in >> maxiter iterations. Do you have a test case? What I have seen in the optimize.tests is only one case for fmin_cg, which looks similar to your case objective function log_pdot = dot(self.F, x) logZ = log(sum(exp(log_pdot))) f = logZ - dot(self.K, x) but might have well behaved parameterization. If you can write a test case that works on the limit of the current precision, we could include it in the test suite. The same optimization problem is used to test several minimizers, so this could also check whether any of the other ones is able to handle this problem. If zoom is also buggy, more work and a failing test case will be required to find and correct the bug. For your other comments, I don't know enough about fmin_cg. amin=1e-12 Could this be a problem if the numerical precision of the objective function and the gradient are not high enough? If you have a better cg algorithm or one that works better for some cases, you could propose it for inclusion in scipy. Thanks for filing the ticket. Josef >> Jason >> >> On Mon, Oct 5, 2009 at 4:33 PM, wrote: >>> >>> On Mon, Oct 5, 2009 at 4:11 PM, Jason Rennie wrote: >>> > The bug seems to be that scipy.optimize.linesearch.line_search can >>> > return a >>> > step size which increases the objective. ?Later linesearches are then >>> > fubar'd b/c the?(phi0-old_old_fval)/derphi0 calculation yields a >>> > negative >>> > value. >>> > Would someone mind sanity-checking this assertion? 
?Is it possible >>> > for?minpack2.dcsrch to return a step which yields a negative objective? >>> > ?I'm >>> > seeing it when the amin value is hit. ?I.e. it's returning a step size >>> > of >>> > 1e-8. >>> > Thanks, >>> > Jason >>> >>> bug candidate: linesearch doesn't honor warn >>> >>> Has fortran 0 or 1 based indexing? >>> >>> I think task[1:4] == 'WARN': >>> should instead be >>> >>> task[:4] == 'WARN': >>> >>> >>> Josef >>> >>> >>> >>> > >>> > -- >>> > Jason Rennie >>> > Research Scientist, ITA Software >>> > 617-714-2645 >>> > http://www.itasoftware.com/ >>> > >>> > >>> > _______________________________________________ >>> > SciPy-User mailing list >>> > SciPy-User at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>> > >>> > >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> -- >> Jason Rennie >> Research Scientist, ITA Software >> 617-714-2645 >> http://www.itasoftware.com/ >> > > > > -- > Jason Rennie > Research Scientist, ITA Software > 617-714-2645 > http://www.itasoftware.com/ > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From nitinchandra1 at gmail.com Tue Oct 6 01:43:02 2009 From: nitinchandra1 at gmail.com (nitin chandra) Date: Tue, 6 Oct 2009 11:13:02 +0530 Subject: [SciPy-User] NumPy error In-Reply-To: References: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com> Message-ID: <965122bf0910052243l2f062e24jc9295e34e8262b71@mail.gmail.com> Hello There is no 'libblas.a' in the source directory. [root at mi newpy]# find /home/nitin/newpy/ -name "*blas.*" /home/nitin/newpy/ATLAS-3.9.14/doc/cblas.pdf /home/nitin/newpy/ATLAS-3.9.14/include/atlas_pkblas.h /home/nitin/newpy/ATLAS-3.9.14/include/atlas_f77blas.h /home/nitin/newpy/ATLAS-3.9.14/include/cblas.h /home/nitin/newpy/xblas-1.0.248/libxblas.a /home/nitin/newpy/xblas-1.0.248/m4/cblas.m4 /home/nitin/newpy/numpy-1.2.1/numpy/core/blasdot/_dotblas.c /home/nitin/newpy/numpy-1.2.1/numpy/core/blasdot/cblas.h /home/nitin/newpy/xblas.tar.gz /home/nitin/newpy/numpy-1.3.0rc2/build/lib.linux-i686-2.6/numpy/core/_dotblas.so /home/nitin/newpy/numpy-1.3.0rc2/build/temp.linux-i686-2.6/numpy/core/blasdot/_dotblas.o /home/nitin/newpy/numpy-1.3.0rc2/numpy/core/blasdot/_dotblas.c /home/nitin/newpy/numpy-1.3.0rc2/numpy/core/blasdot/cblas.h /home/nitin/newpy/scipy-0.7.1/scipy/lib/blas/tests/test_blas.py /home/nitin/newpy/scipy-0.7.1/scipy/lib/blas/tests/test_fblas.py /home/nitin/newpy/scipy-0.7.1/scipy/lib/blas/fblas.pyf.src /home/nitin/newpy/scipy-0.7.1/scipy/lib/blas/cblas.pyf.src /home/nitin/newpy/scipy-0.7.1/scipy/linalg/tests/test_blas.py /home/nitin/newpy/scipy-0.7.1/scipy/linalg/tests/test_fblas.py /home/nitin/newpy/scipy-0.7.1/scipy/linalg/generic_cblas.pyf /home/nitin/newpy/scipy-0.7.1/scipy/linalg/blas.py /home/nitin/newpy/scipy-0.7.1/scipy/linalg/generic_fblas.pyf /home/nitin/newpy/Linux_P4ESSE2/lib/libcblas.so /home/nitin/newpy/Linux_P4ESSE2/lib/libf77refblas.a /home/nitin/newpy/Linux_P4ESSE2/lib/libf77blas.so /home/nitin/newpy/Linux_P4ESSE2/lib/libcblas.a /home/nitin/newpy/Linux_P4ESSE2/lib/libf77blas.a Or in the /opt directory tree ... 
where i am installing :- [root at mi newpy]# find /opt/ -name "*blas.*" /opt/python262/lib/python2.6/site-packages/numpy/core/_dotblas.so /opt/atlas/lib/libcblas.so /opt/atlas/lib/libf77blas.so /opt/atlas/lib/libcblas.a /opt/atlas/lib/libf77blas.a /opt/atlas/include/cblas.h In my previous mail i have attached a file documenting the steps taken in installing various packages. my site.cfg file is as follows =================================================================== INSTALLING numpy # tar zxvf numpy-1.2.1.tar.gz # cd numpy-1.2.1 # cp site.cfg.example site.cfg # joe site.cfg [DEFAULT] library_dirs = /usr/local/lib:/opt/atlas/lib:/opt/fftw332/lib:/opt/python262/lib include_dirs = /usr/local/include:/opt/atlas/include:/opt/fftw332/include:/opt/python262/include [blas_opt] libraries = f77blas, cblas, atlas [lapack_opt] libraries = lapack, f77blas, cblas, atlas, g2c [fftw] libraries = fftw3, fftw3f [fftw_opt] libraries = fftw3_threads, fftw3f_threads SAVE and EXIT # /opt/python262/bin/python setup.py -v config_fc build_ext --fcompiler=gnu95 build | tee build.log # /opt/python262/bin/python setup.py install --prefix=/opt/python262 2>&1 | tee install.log # source ~/.bashrc TO UN-INSTALL numpy Remove dir 'build' Remove /opt/python262/lib/python2.6/site-packages/numpy-*.egg and Remove -rvf /opt/python262/lib/python2.6/site-packages/numpy/ ;;; numpy/ direcotry ===================================================================== > > The error message was that it has a missing symbol, namely, > > ? ? ? blas_zhemv2_x_ > > I'm not real sure how to read this symbol ?but try > > ? ? ?nm libblas.a | grep ?_zhemv2 > > and see if the symbol is defined. > > Note, I don't have this symbol in my libraries but I'm using different > versions of the libraries. > > Please post your site.cfg file found in DIRECTORY>/python2.6/site-packages/numpy/distutils. > > Is it possible you didn't add the BLAS libraries to the site.cfg file? From david at ar.media.kyoto-u.ac.jp Tue Oct 6 02:16:15 2009 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 06 Oct 2009 15:16:15 +0900 Subject: [SciPy-User] NumPy error In-Reply-To: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com> References: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com> Message-ID: <4ACAE0AF.2060001@ar.media.kyoto-u.ac.jp> Hi Nitin, nitin chandra wrote: > Hello Everyone, > > I have a System Configured with P4 1.8GHz, 256 DDR RAM, 80 GB HDD PATA > which has FC10 with the default kernel 2.6.27 installed. > > I have Installed following from source / tar.gz / gz files in /opt/ > > I Installed Python 2.6.2, LAPACK-3.2.1, XBLAS-1.0.248, ATLAS-3.9.14 ( > Linux_P4ESSE2 ), FFTW-3.2.2, nose-0.11.1, First, you don't need FFTW, scipy does not use it (anymore). I would *strongly* recommend against XBLAS and LAPACK 3.2.1. Those versions cause a lot of trouble and their features (extended precision support) is not used by numpy/scipy anyway. Use the lapack 3.1.1 (lite package is enough), and its included BLAS. Make sure you are using gfortran to compile lapack and not g77 (using the make.inc.gfortran from lapack should be enough for this). Then rebuild atlas, and run the make time and make test from atlas. Please do not use atlas 3.9.14: it is a development version, the last stable version is 3.8.3. Building atlas -------------- I assume the path is /usr/src/atlas-3.8.3, change to the right path in your case. 
To build atlas, you may want to use the following command, from atlas
sources:

mkdir /usr/src/atlas-3.8.3/MyObj && cd /usr/src/atlas-3.8.3/MyObj &&
../configure -Fa alg -fPIC --with-netlib-lapack=path_to_lapack

with path_to_lapack the path to your built lapack (e.g. lapack_LINUX.a or
something). Make sure the path exists; atlas does not check that the path
is valid, and creates a bogus liblapack.a in that case: check that the file
in MyObj/lib/liblapack.a has a reasonable size (several MB).

Then, just do:

cd /usr/src/atlas-3.8.3/MyObj && make && make test

If this succeeds, you have a working atlas.

Building numpy
--------------

Edit your site.cfg as follows:

[atlas]
library_dirs = /usr/src/atlas-3.8.3/MyObj/lib

And then build numpy and scipy with the python setup.py build command. It
is a good idea to run the config step separately:

python setup.py config

to check that the correct library is detected.

cheers,

David

From daniel.cotton at gmail.com  Tue Oct 6 11:41:37 2009
From: daniel.cotton at gmail.com (Daniel Cotton)
Date: Tue, 6 Oct 2009 08:41:37 -0700 (PDT)
Subject: [SciPy-User] [SciPy-user] Incomplete SciPy (probably related to LAPACK), help requested.
Message-ID: <25769919.post@talk.nabble.com>

I am currently driving myself crazy with a Python 2.5.1 installation
problem on a remote linux (scientific linux) machine administered by an
admin. The machine uses gcc 3.4.5 and needs to be kept at this level for
other software.

Basically I get this error:

>>> from scipy import interpolate
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 13, in <module>
    from rbf import Rbf
  File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/rbf.py", line 47, in <module>
    from scipy import linalg
  File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 24, in <module>
    from scipy.linalg import calc_lwork
ImportError: cannot import name calc_lwork

Or this one:

>>> from scipy.interpolate import interp1d
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 13, in <module>
    from rbf import Rbf
  File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/rbf.py", line 47, in <module>
    from scipy import linalg
  File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 389, in <module>
    import decomp
  File "/usr/local/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 23, in <module>
    from blas import get_blas_funcs
  File "/usr/local/lib/python2.5/site-packages/scipy/linalg/blas.py", line 14, in <module>
    from scipy.linalg import fblas
ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so: undefined symbol: _gfortran_compare_string

when I try and import SciPy's interpolate function.

Thinking the problem related to the BLAS installation used by python
(where the following two links detail the issue),

http://www.scipy.org/FAQ#head-8371c35ef08b877875217aaac5489fc747b4aceb
http://article.gmane.org/gmane.comp.python.scientific.user/12145

I had my network administrator reinstall LAPACK. He installed them to
/usr/local/lapack and apparently everything went fine. At this point I
still got the same errors, but it seemed to me that Python wasn't looking
in the right spot, so I tried reconfiguring the PYTHONPATH for a session.
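(In a Python session this is the equivalent of something like the following
sketch -- the exact invocation is from memory:)

import sys
sys.path.insert(0, '/usr/local/lapack/lapack')   # the directory mentioned below
from scipy import interpolate                    # then retry the failing import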
Adding /usr/local/lapack/lapack seemed to get rid of some of the errors but
not all (I haven't been able to remember how I did this). I got the
impression that python's internal modules needed to know to look in those
directories for things related to lapack and that just having it in the
path wasn't sufficient.

Some further searching revealed this set of instructions:

http://math-atlas.sourceforge.net/errata.html#completelp

So, I had my Network admin do that too. He said he did that and I still
have the same problem.

Any suggestions you can offer would be much appreciated. I need to be able
to give my Network admin instructions as to what to do, and both he and I
are Python novices; plus he is busy and it takes a day or two for him to
get to my problem, so please be explicit and as complete as possible. The
machine python is installed on is used by many people, so I don't have
access to anything approaching a high level, and the network admin is
going to want a good reason before he does anything drastic that could
affect other users.

Daniel.
--
View this message in context: http://www.nabble.com/Incomplete-SciPy-%28probably-related-to-LAPACK%29%2C-help-requested.-tp25769919p25769919.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From Emmanuel.Lambert at intec.ugent.be  Tue Oct 6 11:51:26 2009
From: Emmanuel.Lambert at intec.ugent.be (Emmanuel Lambert)
Date: Tue, 06 Oct 2009 17:51:26 +0200
Subject: [SciPy-User] [SciPy-user] scipy.weave crashes for pure inline
In-Reply-To: <1253686418.3851.125.camel@emmanuel-ubuntu>
References: <25530916.post@talk.nabble.com>
	<1253686418.3851.125.camel@emmanuel-ubuntu>
Message-ID: <1254844286.4560.3774.camel@emmanuel-ubuntu>

For the record of the archives/mailinglist, I just want to mention that
there is a patch available for the problem described below. It can be
found at:

http://projects.scipy.org/scipy/ticket/855

> On Tue, 2009-09-22 at 14:47 -0700, Thomas Robitaille wrote:
> > Hi,
> >
> > I'm using a recent svn revision of scipy (5925). After installing it I
> > went to scipy/weave/examples and ran 'python array3d.py'. I get the
> > following error message (below). Can other people reproduce this
> > problem? If not, maybe it's some local installation issue.
> > > > Thanks, > > > > Thomas > > > > --- > > > > numpy: > > [[[ 0 1 2 3] > > [ 4 5 6 7] > > [ 8 9 10 11]] > > > > [[12 13 14 15] > > [16 17 18 19] > > [20 21 22 23]]] > > Pure Inline: > > Traceback (most recent call last): > > File "array3d.py", line 105, in > > main() > > File "array3d.py", line 98, in main > > pure_inline(arr) > > File "array3d.py", line 57, in pure_inline > > weave.inline(code, ['arr']) > > File > > "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/inline_tools.py", > > line 324, in inline > > results = attempt_function_call(code,local_dict,global_dict) > > File > > "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/inline_tools.py", > > line 392, in attempt_function_call > > function_list = function_catalog.get_functions(code,module_dir) > > File "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/catalog.py", > > line 615, in get_functions > > function_list = self.get_cataloged_functions(code) > > File "/Users/tom/Library/Python/2.6/site-packages/scipy/weave/catalog.py", > > line 529, in get_cataloged_functions > > if cat is not None and code in cat: > > File > > "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/shelve.py", > > line 110, in __contains__ > > return key in self.dict > > File > > "/Users/tom/Library/Python/2.6/site-packages/scipy/io/dumbdbm_patched.py", > > line 73, in __getitem__ > > pos, siz = self._index[key] # may raise KeyError > > KeyError: 0 > > > ------------------------------------------------------------------- > > On Mon, 2009-09-21 at 16:46 +0200, Emmanuel Lambert wrote: > > Hi, > > > > I compiled SciPy and Numpy on a machine with Scientific Linux. > > > > We detected a problem with Weave and after > > investigation, it turns out that some of the unit tests delivered > > with Scipy-Weave also fail ! Below is a list of tests that fail in for > > example the "test_c_spec" file. They all raise a KeyError. > > > > This is with SciPy 0.7.1 on Python 2.6. I also downloaded the latest > > Weave code again from the SVN repository, but the problem is not > > resolved. > > > > Any idea on how to tackle this problem? There are no posts that help > > me further. I don't have this problem with the same standard scipy > > package that was is available for Ubuntu 9.04 (apparently the weave > > version number is the same). > > > > It looks like the compilation works fine, see sample stdout also > > below. > > > > What could cause this? > > > > thanks for any help. 
> > Emmanuel
> >
> > ******************* SAMPLE OF STDOUT ******************
> >
> > -------------------- >> begin captured stdout << ---------------------
> >
> > running build_ext
> > running build_src
> > building extension "sc_d133102ab45193e072f8dbb5a1f6848513" sources
> > customize UnixCCompiler
> > customize UnixCCompiler using build_ext
> > customize UnixCCompiler
> > customize UnixCCompiler using build_ext
> > building 'sc_d133102ab45193e072f8dbb5a1f6848513' extension
> > compiling C++ sources
> > C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -fPIC
> >
> > compile options: '-I/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave -I/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/scxx -I/user/home/gent/vsc401/vsc40157/numpy-runtime/numpy/core/include -I/apps/gent/gengar/harpertown/software/Python/2.6.2-gimkl-0.5.0/include/python2.6 -c'
> > g++: /user/home/gent/vsc401/vsc40157/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f6848513.cpp
> > g++ -pthread -shared /tmp/vsc40157/python26_intermediate/compiler_c1b5f1b73f1ce7d0c836cdad4c7c5ded/user/home/gent/vsc401/vsc40157/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f6848513.o /tmp/vsc40157/python26_intermediate/compiler_c1b5f1b73f1ce7d0c836cdad4c7c5ded/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/scxx/weave_imp.o -o /user/home/gent/vsc401/vsc40157/.python26_compiled/sc_d133102ab45193e072f8dbb5a1f6848513.so
> > running scons
> >
> > --------------------- >> end captured stdout << ----------------------
> >
> >
> > ********************** TESTS THAT FAIL ***********************
> >
> > -bash-3.2$ python ./test_c_spec.py
> > E..........EE.................EEEE......E..........EE.................EEEE..............
> > ======================================================================
> > ERROR: test_call_function (test_c_spec.CallableConverter)
> > ----------------------------------------------------------------------
> > Traceback (most recent call last):
> >   File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/tests/test_c_spec.py", line 296, in test_call_function
> >     compiler=self.compiler,force=1)
> >   File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/inline_tools.py", line 301, in inline
> >     function_catalog.add_function(code,func,module_dir)
> >   File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/catalog.py", line 648, in add_function
> >     self.cache[code] = self.get_functions(code)
> >   File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/catalog.py", line 615, in get_functions
> >     function_list = self.get_cataloged_functions(code)
> >   File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/weave/catalog.py", line 529, in get_cataloged_functions
> >     if cat is not None and code in cat:
> >   File "/apps/gent/gengar/harpertown/software/Python/2.6.2-gimkl-0.5.0/lib/python2.6/shelve.py", line 110, in __contains__
> >     return key in self.dict
> >   File "/user/home/gent/vsc401/vsc40157/scipy-runtime/scipy/io/dumbdbm_patched.py", line 73, in __getitem__
> >     pos, siz = self._index[key]     # may raise KeyError
> > KeyError: 0
> >
> > [The other 13 errors all fail with this identical traceback -- inline ->
> > add_function -> get_functions -> get_cataloged_functions -> shelve ->
> > dumbdbm_patched -> KeyError: 0 -- and differ only in the test that
> > triggers it:]
> >
> > ERROR: test_file_to_py (test_c_spec.FileConverter)                 [test_c_spec.py line 262]
> > ERROR: test_py_to_file (test_c_spec.FileConverter)                 [test_c_spec.py line 246]
> > ERROR: test_convert_to_dict (test_c_spec.SequenceConverter)        [test_c_spec.py line 305]
> > ERROR: test_convert_to_list (test_c_spec.SequenceConverter)        [test_c_spec.py line 309]
> > ERROR: test_convert_to_string (test_c_spec.SequenceConverter)      [test_c_spec.py line 313]
> > ERROR: test_convert_to_tuple (test_c_spec.SequenceConverter)       [test_c_spec.py line 317]
> > ERROR: test_call_function (test_c_spec.TestCallableConverterUnix)  [test_c_spec.py line 296]
> > ERROR: test_file_to_py (test_c_spec.TestFileConverterUnix)         [test_c_spec.py line 262]
> > ERROR: test_py_to_file (test_c_spec.TestFileConverterUnix)         [test_c_spec.py line 246]
> > ERROR: test_convert_to_dict (test_c_spec.TestSequenceConverterUnix)    [test_c_spec.py line 305]
> > ERROR: test_convert_to_list (test_c_spec.TestSequenceConverterUnix)    [test_c_spec.py line 309]
> > ERROR: test_convert_to_string (test_c_spec.TestSequenceConverterUnix)  [test_c_spec.py line 313]
> > ERROR: test_convert_to_tuple (test_c_spec.TestSequenceConverterUnix)   [test_c_spec.py line 317]
> >
> > ----------------------------------------------------------------------
> > Ran 88 tests in 32.581s
> >
> > FAILED (errors=14)

From cournape at gmail.com  Tue Oct 6 11:52:57 2009
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 7 Oct 2009 00:52:57 +0900
Subject: [SciPy-User] [SciPy-user] Incomplete SciPy (probably related to LAPACK), help requested.
In-Reply-To: <25769919.post@talk.nabble.com>
References: <25769919.post@talk.nabble.com>
Message-ID: <5b8d13220910060852h4c2a0f7cte846d73e28cbf40f@mail.gmail.com>

On Wed, Oct 7, 2009 at 12:41 AM, Daniel Cotton wrote:
>
> I am currently driving myself crazy with a Python 2.5.1 installation
> problem on a remote linux (scientific linux) machine administered by an
> admin. The machine uses gcc 3.4.5 and needs to be kept at this level for
> other software.
>
> Basically I get this error:
>
>>>> from scipy import interpolate
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 13, in <module>
>     from rbf import Rbf
>   File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/rbf.py", line 47, in <module>
>     from scipy import linalg
>   File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
>     from basic import *
>   File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 24, in <module>
>     from scipy.linalg import calc_lwork
> ImportError: cannot import name calc_lwork
>
> Or this one:
>
>>>> from scipy.interpolate import interp1d
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 13, in <module>
>     from rbf import Rbf
>   File "/usr/local/lib/python2.5/site-packages/scipy/interpolate/rbf.py", line 47, in <module>
>     from scipy import linalg
>   File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
>     from basic import *
>   File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 389, in <module>
>     import decomp
>   File "/usr/local/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 23, in <module>
>     from blas import get_blas_funcs
>   File "/usr/local/lib/python2.5/site-packages/scipy/linalg/blas.py", line 14, in <module>
>     from scipy.linalg import fblas
> ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so:
> undefined symbol: _gfortran_compare_string

Your fortran compilers are mismatched - this cannot work. If you use gcc
3.*, you should use g77 (the fortran compiler in the gcc 3.* series). You
need to build every single fortran library (blas/lapack here) with g77,
including atlas.

I think you are making things much more complicated than they are: note
that you don't need to bother your admin - you can build and install
blas/lapack/atlas/numpy yourself. For example, you can install everything
in $HOME/local - I do the same at work, where I don't have admin
privileges.

To check whether a given library uses gfortran, simply run ldd on it and
see whether it links against libgfortran. If it does, it was built with
gfortran. If it doesn't, it was most likely built with g77.
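For instance (a minimal sketch -- the path is the fblas.so from your
traceback, and running plain ldd at a shell prompt works just as well):

import subprocess

so = '/usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so'
out = subprocess.Popen(['ldd', so],
                       stdout=subprocess.PIPE).communicate()[0]
if 'libgfortran' in out:
    print 'links against libgfortran -> built with gfortran'
else:
    print 'no libgfortran -> most likely built with g77'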
cheers,

David

From agile.aspect at gmail.com  Tue Oct 6 12:01:03 2009
From: agile.aspect at gmail.com (Agile Aspect)
Date: Tue, 6 Oct 2009 09:01:03 -0700
Subject: [SciPy-User] [SciPy-user] Incomplete SciPy (probably related to LAPACK), help requested.
In-Reply-To: <25769919.post@talk.nabble.com>
References: <25769919.post@talk.nabble.com>
Message-ID:

On Tue, Oct 6, 2009 at 8:41 AM, Daniel Cotton wrote:
>
> I am currently driving myself crazy with a Python 2.5.1 installation
> problem on a remote linux (scientific linux) machine administered by an
> admin. The machine uses gcc 3.4.5 and needs to be kept at this level for
> other software.
>
> Basically I get this error:
>
>>>> from scipy import interpolate
> [traceback quoted in full in the previous message]
> ImportError: cannot import name calc_lwork

Appears to be a problem with scipy.linalg - see the end of the next
message.

> Or this one:
>
>>>> from scipy.interpolate import interp1d
> [traceback quoted in full in the previous message]
> ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so:
> undefined symbol: _gfortran_compare_string

Wrong fortran compiler for fblas. Note, g77 and gfortran have different
ABIs.

--
"We are drowning in information and starving for knowledge."
    -- Rutherford D. Roger

From cool-rr at cool-rr.com  Tue Oct 6 03:24:08 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Tue, 6 Oct 2009 09:24:08 +0200
Subject: [SciPy-User] Simulations
In-Reply-To: <4ACA44C4.5090003@telenet.be>
References: <4ACA44C4.5090003@telenet.be>
Message-ID:

Hey Cesar,

Your comments are interesting.

Can you explain to me a bit about frequency domain simulations? Can you
give an example of a simulation simulating a real-world process?

I agree that GarlicSim must handle the bureaucracy well, as its job is to
let the user write a simulation with as little bureaucracy as possible.

P.S. I registered garlicsim.org and it is now the main domain.

Ram.

On Mon, Oct 5, 2009 at 9:11 PM, Cesar Koers wrote:

> Hi Ram,
>
> I quickly read through your intro doc, I think you've explained your
> idea quite well.
>
> One remark though: I think your framework would fit well to time-domain
> (transient) models. But at this moment I don't see how you could cast a
> frequency domain simulation (commonly used in EM solvers) in it. I'd be
> careful with the idea that 'all simulations' fit into this.
>
> What I think is key to the success of this kind of framework is how well
> it handles the 'bureaucracy' of performing simulations (and speed, but
> you've already mentioned that the actual number crunching is up to the
> user of GarlicSim). With this, I mean the boring stuff, e.g.:
>
> * keeping track of which parameters vary between simulations
> * extracting data from a set of simulations as a function of one of
>   these parameters
> * storing (and backing up) simulation results without taking up too much
>   space and needing to invent unique and descriptive file names
> * being able to redo a simulation (storing simulation parameters with
>   results)
> * making simulation reports
> * comparing results with real-world data
> * for long simulations, being able to continue simulation after a crash
>
> Just my 2 cents
>
> Best regards
>
> C
>
> cool-RR wrote:
> > Hello,
> >
> > This is not directly related to SciPy; I'm posting it here because I
> > figure that there may be people here who know the scientific computing
> > world enough to help me with my question.
> >
> > I've been working on an open-source scientific computing project for
> > about 6 months now, and I've come to the conclusion that it's about
> > time to find other users besides myself for it, so I may get valuable
> > feedback about which direction I should be taking this project.
> >
> > The project is called GarlicSim (http://garlicsim.com). It's a
> > Pythonic platform for working with simulations. You may read more
> > about it on the webpage. In short, it's a very general framework for
> > creating, running and analyzing simulations. It's not specific to any
> > scientific field; its role is to provide a general mold into which all
> > simulations can be cast. If you want to know more about it you can
> > also read a (yet-incomplete) introduction to it.
> >
> > So what I want to know is, who would be good potential first users for
> > this, and how could I reach them?
> > I'm not even sure which scientific field I would like to target, so
> > please suggest.
> >
> > Thanks,
> > Ram Rachum
> >
> > _______________________________________________
> > SciPy-User mailing list
> > SciPy-User at scipy.org
> > http://mail.scipy.org/mailman/listinfo/scipy-user

> --
> Gaetan Cesar Koers
> Kerkveldweg 82
> 1851 Humbeek
> +32(0)486 20 11 16
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Sincerely,
Ram Rachum

From nahumoz at gmail.com  Tue Oct 6 14:24:36 2009
From: nahumoz at gmail.com (Oz Nahum)
Date: Tue, 6 Oct 2009 20:24:36 +0200
Subject: [SciPy-User] scipy.optimize fmin error
Message-ID: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com>

Hi Guys,

I'm trying to convert some matlab code to python, but I'm having a lot of
difficulty; I've been struggling with the differences between the
languages for almost two days now. My latest success is actually using
the function fmin (before that I had success with leastsq), but I can't
get a reasonable result compared with matlab or with my own knowledge (or
my early assumptions about the observations).

I attach here the code.

I don't understand why the alpha (the dispersivity) I'm trying to find is
always calculated as ~24m, where in matlab I get a result of ~0.6m.
Note also the difference in the quality of the models (the hand-picked
values are the ones I got from matlab).

Here's my code:

from pylab import *
from numpy import *
from scipy.optimize import leastsq, fmin

# injected mass in Kg
M = 0.01
# distance between the wells
r = 8.81    # [m]
# pumping rate
Q = 0.0061  # [m^3/sec]
# thickness of aquifer saturated with water
b = 4.61    # [m]
# uncertainty of the measurements (concentration measurements)
sigma_s = 0.01  # [m]

####
## define the measurements
####

t = array([1.0, 300.0, 600.0, 900., 1200., 1260., 1320., 1380,
           1440, 1500, 1560, 1620, 1680, 1740, 1800, 1860,
           1920, 1980, 2040, 2100, 2160, 2220, 2280, 2340,
           2400, 2460, 2520, 2580, 2640, 2700, 2760, 2820,
           2880, 2940, 3000, 3060, 3120, 3180, 3240, 3300,
           3360, 3420, 3480, 3540, 3600, 3660, 3720, 3780,
           3840, 3900, 4200, 4500, 4800, 5100, 5400, 5700,
           6000, 6300, 6600, 6900, 7200, 7500, 7800, 8100,
           8400, 8700, 9000, 9300, 9600, 9900, 10200, 10500,
           10800, 11100, 11400, 12000])

t = t.transpose()

c = array([0.07, 0.1, 0.11, 0.13, 1.17, 2.15, 3.65, 5.64,
           8.12, 11, 14.3, 17.3, 20.6, 23.5, 26.5, 29.1,
           31.5, 33.5, 35.3, 36.8, 37.9, 38.8, 39.5, 39.8,
           40.1, 40.2, 40.1, 39.9, 39.5, 39, 38.5, 37.9,
           37.3, 36.5, 35.9, 35.1, 34.4, 33.5, 32.9, 32,
           31.2, 30.5, 29.9, 29, 28.2, 27.5, 26.8, 26.1,
           25.4, 24.7, 21.7, 19, 16.8, 14.8, 13.3, 12.1,
           11, 10.1, 9.4, 8.81, 8.15, 7.71, 7.3, 6.98,
           6.67, 6.36, 6.12, 5.92, 5.78, 5.58, 5.41, 5.15,
           4.77, 4.54, 4.37, 4.19]) - 0.07*1e-9*1350

c = c.transpose()
cmax = max(c)
# the index can also be found with find(c == max(c))
tmax = t[find(c == cmax)]   # 2460 ; sec
cr = c/cmax                 # dimensionless concentration [-]
crmax = max(cr)
tr = t/int(tmax)            # dimensionless time [-]
trmax = tr[find(cr == crmax)]

def residuals(alpha, tr, cr):
    # definition for radial flow field
    P = r/alpha  # Peclet number [-]
    # using tmax here causes an error of the optimization
    # it should be like the matlab version
    tmax = sqrt(1 + P**(-2)) - 1/P
    # see eq. 21 in Sauty, 1980; this creates a tmax(P), and P(alpha)
    K = tmax**0.5*exp((P/4/tmax)*(1 - tmax)**2)
    print K
    A = -P/(4*tr)*(1 - tr)**2   # dimensionless
    f = K/tr**0.5*exp(A)        # dimensionless
    #err = (f - cr)
    B = (f - cr)
    B = B**2
    N = len(B)
    B = sum(B)/N
    return B

def OneDmodel(alpha, r):
    P = r/alpha  # Peclet number [-]
    # see eq. 21 in Sauty, 1980; this creates a tmax(P), and P(alpha)
    tmax = sqrt(1 + P**(-2)) - 1/P
    K = sqrt(tmax)*exp(P/(4*tmax)*(1 - tmax)**2)
    print K
    A = -P/(4*tr)*(1 - tr)**2   # dimensionless
    f = K/tr**0.5*exp(A)        # dimensionless
    return f

p0 = 3  # initial alpha value
#x = arange(0, 6e-2, 6e-2/30)
#alpha = leastsq(residuals, p0, args=(cr, tr))
alpha = fmin(residuals, 28, args=(cr,tr), maxiter=10000, maxfun=10000)
#print plsq[0]
#print alpha
#print plsq
print 'optimized dispersivity is ', alpha[0]
alpha = alpha[0]
oneDmodel = OneDmodel(alpha, r)

### Plot
handpicked = OneDmodel(0.6307, r)
plot(tr, cr, 'r+-')
plot(tr, oneDmodel, 'bo-')
plot(tr, handpicked, 'g--')
#print cr
legend(['Real', 'Fit', 'Hand Picked'])
show()

Any kind of help will be more than appreciated!
Thanks,

Oz Nahum
Graduate Student
Zentrum für Angewandte Geologie
Universität Tübingen

---

Imagine there's no countries
it isn't hard to do
Nothing to kill or die for
And no religion too
Imagine all the people
Living life in peace

From warren.weckesser at enthought.com  Tue Oct 6 14:39:03 2009
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Tue, 06 Oct 2009 13:39:03 -0500
Subject: [SciPy-User] scipy.optimize fmin error
In-Reply-To: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com>
References: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com>
Message-ID: <4ACB8EC7.2070009@enthought.com>

In your call to fmin, you have reversed the cr and tr args.  It should be
args=(tr,cr) to match the definition of the residuals() function.
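In other words, with everything else in your script unchanged, only the
one line differs (a minimal sketch, reusing the names defined above):

# argument order must match the signature residuals(alpha, tr, cr)
alpha = fmin(residuals, 28, args=(tr, cr), maxiter=10000, maxfun=10000)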
Warren

Oz Nahum wrote:
> Hi Guys,
> I'm trying to convert some matlab code to python, but I'm having a lot
> of difficulty [...]
> [the full message and script are quoted above]
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From warren.weckesser at enthought.com  Tue Oct 6 14:58:42 2009
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Tue, 06 Oct 2009 13:58:42 -0500
Subject: [SciPy-User] scipy.optimize fmin error
In-Reply-To: <4ACB8EC7.2070009@enthought.com>
References: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com>
	<4ACB8EC7.2070009@enthought.com>
Message-ID: <4ACB9362.10309@enthought.com>

Also, it looks like you are missing some parentheses in your definition
of the c array.  If you define c like this:

c = (array([0.07, 0.1, 0.11, 0.13, 1.17, 2.15, 3.65, 5.64,
            8.12, 11, 14.3, 17.3, 20.6, 23.5, 26.5, 29.1,
            31.5, 33.5, 35.3, 36.8, 37.9, 38.8, 39.5, 39.8,
            40.1, 40.2, 40.1, 39.9, 39.5, 39, 38.5, 37.9,
            37.3, 36.5, 35.9, 35.1, 34.4, 33.5, 32.9, 32,
            31.2, 30.5, 29.9, 29, 28.2, 27.5, 26.8, 26.1,
            25.4, 24.7, 21.7, 19, 16.8, 14.8, 13.3, 12.1,
            11, 10.1, 9.4, 8.81, 8.15, 7.71, 7.3, 6.98,
            6.67, 6.36, 6.12, 5.92, 5.78, 5.58, 5.41, 5.15,
            4.77, 4.54, 4.37, 4.19]) - 0.07) * 1e-9 * 1350

(note the parentheses around array([0.07, ..., 4.19]) - 0.07), you get
the same answer as your matlab code: 0.6307

Warren

Warren Weckesser wrote:
> In your call to fmin, you have reversed the cr and tr args.  It should
> be args=(tr,cr) to match the definition of the residuals() function.
>
> Warren
>
> Oz Nahum wrote:
>> [...]
From ralf.gommers at googlemail.com  Tue Oct 6 15:15:30 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 6 Oct 2009 21:15:30 +0200
Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates
In-Reply-To: <7c21d153-13b0-41d9-8295-9805dc543f12@k41g2000vbt.googlegroups.com>
References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
	<7c21d153-13b0-41d9-8295-9805dc543f12@k41g2000vbt.googlegroups.com>
Message-ID:

On Mon, Oct 5, 2009 at 12:55 PM, denis wrote:

> On Oct 5, 10:01 am, Ralf Gommers wrote:
> > Hi, did you send this to the list because you want to add it to the
> > docs (like here:
> > http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html)?
>
> I was hoping for comments first: is it readable, accurate, the right
> level for anybody?

I think that it has the right level of detail for it to be added to the
ndimage tutorial, maybe even for the map_coordinates docstring with minor
modifications. I will reply with some more detailed comments in a minute.

> (Unreviewed doc is the curse of the web.
> My imaginary bandwagon for BetterDoc includes better indexing / tagging
> and examples of good doc in various categories.)

One hundred percent agreed. This is what the doc wiki is about. It has a
commenting system and edit history, so it works better than the mailing
list for this. Also, the edits only get merged into the built
documentation after review, so no worries there. I encourage you to check
it out.

> How about http://advice.mechanicalkern.com ? looks to be in the right
> direction.

Interesting, I had not seen that site before. I would imagine that
answers from that site get integrated in the reference/user guides where
appropriate.

Cheers,
Ralf

From ckoers at telenet.be  Tue Oct 6 15:23:02 2009
From: ckoers at telenet.be (Cesar Koers)
Date: Tue, 06 Oct 2009 21:23:02 +0200
Subject: [SciPy-User] Simulations
In-Reply-To:
References: <4ACA44C4.5090003@telenet.be>
Message-ID: <4ACB9916.10004@telenet.be>

Hi Ram,

A specific type of time-domain solver for electromagnetics is e.g. FDTD
= finite-difference time-domain. It is based on discretizing the Maxwell
equations in time & space. But the Maxwell equations can also be
expressed in the frequency domain (thus for every frequency instead of
every time instant); this leads to FDFD = finite-difference
frequency-domain. Other kinds of models, e.g. those based on finite
elements (FE), can also be developed in the frequency domain.
Perhaps you're now thinking that your 'step function' would still work
in the frequency domain (the response at frequency f_{i+1} as a function
of the response at frequency f_i), but to my knowledge this doesn't
work, because it would require the system to be modeled by differential
equations in the frequency domain (which I haven't encountered before).

Ah, and some other 'bureaucracy' features:
* tracking time spent / calculating the time remaining till the end
* refining/coarsening the time step to improve accuracy / reduce
  simulation time, respectively

best regards

C

cool-RR wrote:
> Hey Cesar,
>
> Your comments are interesting.
>
> Can you explain to me a bit about frequency domain simulations? Can you
> give an example of a simulation simulating a real-world process?
> [...]
> [the rest of Ram's reply, and the messages it quotes, appear in full
> earlier in this digest]
--
Gaetan Cesar Koers
Kerkveldweg 82
1851 Humbeek
+32(0)486 20 11 16

From nahumoz at gmail.com  Tue Oct 6 15:45:19 2009
From: nahumoz at gmail.com (Oz Nahum)
Date: Tue, 6 Oct 2009 21:45:19 +0200
Subject: [SciPy-User] scipy.optimize fmin error
Message-ID: <6ec71d090910061245h396896beq5004646e32db4651@mail.gmail.com>

> In your call to fmin, you have reversed the cr and tr args.  It should
> be args=(tr,cr) to match the definition of the residuals() function.

Thanks, that does the trick!!! That's what I call erosion - it took me
far too much time to implement this in python compared with matlab, and
out of sheer frustration I didn't see that little error! Our university
is far too Matlab ($%^!#) centric; there's no one to ask around here,
unfortunately...

BTW, feel free to use this code (with the correction, of course) as a
real-world example - it's a tracer test analysis from a groundwater test
site here in southern Germany.

Also, I felt python is much faster. With matlab, optimization of the
problem took: Elapsed time is 0.212064 seconds (for the first run...
further runs were much better). With python (first run): took
0.0125770568848 sec to run. Which is a nice bonus!

Many thanks,

P.S. When I have some time on the weekend I'll put this on my blog,
which already has many numpy/scipy examples!

Oz Nahum
Graduate Student
Zentrum für Angewandte Geologie
Universität Tübingen

---

Imagine there's no countries
it isn't hard to do
Nothing to kill or die for
And no religion too
Imagine all the people
Living life in peace

From d.l.goldsmith at gmail.com  Tue Oct 6 15:48:58 2009
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Tue, 6 Oct 2009 12:48:58 -0700
Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates
In-Reply-To:
References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
	<7c21d153-13b0-41d9-8295-9805dc543f12@k41g2000vbt.googlegroups.com>
Message-ID: <45d1ab480910061248s5034298bg3b64d8fd55a10e8d@mail.gmail.com>

Hi, Denis.  I merely want to reinforce what Ralf has said vis-a-vis our
docstring-editing wiki: its very purpose is to maximize participation in
*generating* docstrings (the preferred "baseline" documentation in
Python), while also (hopefully) assuring the highest quality for what
actually ends up in the code, by regulating who gets to "review" and
"approve" the docstrings. Also, there resides a link to our official
docstring standard, with which all docstrings are required to comply.
We heartily encourage new docstring "editors" (i.e., contributors), but,
as hinted at by Ralf, rather than submitting docstring contributions via
the mailing list, we find it much more efficient and useful for editors
(and essentially require reviewers) to use the wiki; don't worry about
your contribution being vetted before you make it - that's one of the
functions the wiki is set up for. I believe Ralf provided the link to
the wiki's "Front Page" - which contains all the essential introductory
information and links - earlier in this thread, but if you need any
help, feel free to email me (or Ralf - I think I can safely speak for
him in this regard) off-list. One thing, however: we prefer discussion
of docstring and wiki issues to take place on scipy-dev, and thus
encourage you to subscribe - and post this kind of material - to that
list. I sincerely hope none of this discourages your participation -
that would be the opposite of my intent - and I wholeheartedly welcome
you to our "editorial community." :-)

Thanks for your efforts,

David Goldsmith
Technical Editor
Olympia, WA, USA

On Tue, Oct 6, 2009 at 12:15 PM, Ralf Gommers wrote:
> [...]

From ralf.gommers at googlemail.com  Tue Oct 6 15:49:40 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 6 Oct 2009 21:49:40 +0200
Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates
In-Reply-To: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com>
Message-ID:

First a general comment: the example code should be self-contained so
the reader can execute all lines and get the correct result.
Also, try to write docs as reStructuredText immediately, because that is
the form in which they have to end up eventually.

On Fri, Oct 2, 2009 at 6:52 PM, denis wrote:

> Folks,
> here is a small tutorial example of scipy.ndimage.map_coordinates:
>
> Say Cities is an n x 2 array of [latitude, longitude] coordinates, like
>     Paris = [48.9, 2.4]
>     Rome = [41.9, 12.5]
>     Greenwich = [51.5, 0]
>     Cities = np.array([ Paris, Rome, Greenwich ])
>
> and A is a 91 x 360 array of temperatures at integer [lat,long] --

Why 91 and not 90? A should be defined in code, and a more descriptive
name would be nice.

> A[0] along the equator, A[:,0] along the prime meridian through
> Greenwich.

I'd write A[0] as A[0, :], more explicit.

> Then
>
>     z = scipy.ndimage.map_coordinates( A, Cities.T, order=order )
>

`order` is not defined.

> is the 3 temperatures at Paris, Rome and Greenwich -- approximately,
> depending on order.
> The transpose Cities.T is used because map_coordinates takes columns,
> not rows.
> ("RuntimeError: invalid shape for coordinate array"
> may mean that you forgot the .T .)

Would leave out the RuntimeError. The sentence before is clear enough.

> If order is 0, map_coordinates rounds [lat,long] to the nearest
> integers: the temperature at Paris is approximated by A[49,2].
> If 1, it does bilinear interpolation in the square with corners
> A[48,2], A[48,3], A[49,2], A[49,3] for Paris.
> If 2, it does quadratic interpolation over the 9 points A[48:51, 1:4].
> And so on, up to order 5; the default is order=3 (Catmull-Rom ?)
> Order 1, bilinear, is much faster than 2 or 3.

Spline interpolation is used, I would mention this explicitly. Not sure
if it is Catmull-Rom.

> What happens to A[51,-1] etc. west of Greenwich ?  See the mode=
> option.

If you ask the reader a question, why not answer it? This would annoy me.

> Of course the values in A may be arrays -- colors, sounds, anything
> that can be blended or interpolated -- not just scalars.

This needs example code, otherwise the reader will be confused. Using a
3-D instead of a 2-D array will not work.

> Links:
> http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html
> http://www.scipy.org/Cookbook/Interpolation
> http://en.wikipedia.org/wiki/Multivariate_interpolation ff.
>
> For an introduction to interpolation methods, see ... NR ?

Sure, NR is fine.

> For the reverse problem of turning scattered data to a regular grid,
> see
> http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.griddata
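Putting those comments together, a self-contained version could look
something like this (a sketch: filling A with made-up numbers and
picking order=1, since the original leaves both open):

import numpy as np
from scipy.ndimage import map_coordinates

# temperatures at integer [lat, long]; 91 rows = latitudes 0..90 inclusive
A = np.random.uniform(-5.0, 35.0, (91, 360))

Paris     = [48.9,  2.4]
Rome      = [41.9, 12.5]
Greenwich = [51.5,  0.0]
Cities = np.array([Paris, Rome, Greenwich])

# map_coordinates takes one row of coordinates per axis, hence the .T
z = map_coordinates(A, Cities.T, order=1)   # order=1: bilinear
print z   # three interpolated temperatures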
Cheers,
Ralf

From d.l.goldsmith at gmail.com  Tue Oct 6 16:06:12 2009
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Tue, 6 Oct 2009 13:06:12 -0700
Subject: [SciPy-User] Simulations
In-Reply-To: <4ACB9916.10004@telenet.be>
References: <4ACA44C4.5090003@telenet.be> <4ACB9916.10004@telenet.be>
Message-ID: <45d1ab480910061306n3e9f4c54k57d63b36dcda72ba@mail.gmail.com>

On Tue, Oct 6, 2009 at 12:23 PM, Cesar Koers wrote:

> Hi Ram,
>
> Perhaps you're now thinking that your 'step function' would still work
> in the frequency domain (the response at frequency f_{i+1} as a
> function of the response at frequency f_i), but this doesn't work (to
> my knowledge) because it requires that the system is modeled by
> differential equations in the frequency domain (haven't encountered
> this before)

However, consider what happens to the TD DE model when you transform it
to the FD: the DEs become AEs (algebraic equations), so conceivably a
"general simulator", if equipped with the appropriate machinery under
the hood, could be designed with an FD "mode". Clearly Ram's module
doesn't seem to support this dual-mode functionality yet, but now that
you've alerted him to this "necessity" for anything he'd want to bill as
"general," he can opt to try to include it (or simply re-bill what he's
got as a "General Time-Domain Simulation Framework").
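To make that concrete, here's a toy sketch (my own illustration, nothing
GarlicSim-specific): for dx/dt = -a*x + cos(w*t), a time-domain simulator
has to step the ODE, while in the frequency domain the same model
collapses to the single algebraic equation (a + 1j*w)*X = 1:

import numpy as np

a, w = 2.0, 5.0   # decay rate and drive frequency in dx/dt = -a*x + cos(w*t)
dt = 1e-3
t = np.arange(0.0, 20.0, dt)

# time domain: step the differential equation (explicit Euler)
x = np.zeros(len(t))
for k in xrange(len(t) - 1):
    x[k + 1] = x[k] + dt * (-a * x[k] + np.cos(w * t[k]))

# frequency domain: the ODE has become algebraic -- (a + 1j*w)*X = 1
X = 1.0 / (a + 1j * w)

one_period = t > t[-1] - 2 * np.pi / w   # one drive period at the end
print 'TD steady-state amplitude:', x[one_period].max()
print 'FD amplitude |X|:', abs(X)   # 1/sqrt(a**2 + w**2) ~ 0.1857

The two agree to about three decimal places - but the TD loop needed
20000 steps to get there, while the FD answer is a single division.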
With this, I mean the boring stuff, like > e.g.: > > > > * keeping track of which parameters vary between simulations > > * extracting data from a set of simulations as a function of one of > > these parameters > > * storing (and backing up) simulation results without taking up too > much > > space and needing to invent unique and descriptive file names > > * being able to redo a simulation (storing simulation parameters with > > results) > > * making simulation reports > > * comparing results with real-world data > > * for long simulations, being able to continue simulation after a > crash > > > > Just my 2 cents > > > > Best regards > > > > C > > > > > > cool-RR wrote: > > > Hello, > > > > > > This is not directly related to SciPy; I'm posting it here because > I > > > figure that there may be people here who know the scientific > > computing > > > world enough to help me with my question. > > > > > > I've been working on an open-source scientific computing project > for > > > about 6 months now, and I've come to the conclusion that it's > > about time > > > to find other users except myself for it, so I may get valuable > > feedback > > > about which direction I should be taking this project. > > > > > > The project is called GarlicSim (http://garlicsim.com > > > ). It's a Pythonic platform for working > with > > > simulations. You may read more about it on the webpage. In short, > > it's a > > > very general framework for creating, running and analyzing > > simulations. > > > It's not specific to any scientific field; Its role is to provide > a > > > general mold into which all simulations can be cast. If you want > > to know > > > more about it you can also read a (yet-incomplete) introduction > > > > > < > http://dl.getdropbox.com/u/1927707/Introduction%20to%20GarlicSim.doc> > > to > > > it. > > > > > > So what I want to know is, who would be good potential first > > users for > > > this, and how could I reach them? > > > I'm not even sure which scientific field I would like to target, > so > > > please suggest. > > > > > > > > > Thanks, > > > Ram Rachum > > > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > > Gaetan Cesar Koers > > Kerkveldweg 82 > > 1851 Humbeek > > +32(0)486 20 11 16 > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > -- > > Sincerely, > > Ram Rachum > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > Gaetan Cesar Koers > Kerkveldweg 82 > 1851 Humbeek > +32(0)486 20 11 16 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
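To make the time-domain versus frequency-domain point concrete: for a linear model dx/dt = A*x + f(t), transforming to the frequency domain turns the differential equation into the algebraic equation (jw*I - A)*X(w) = F(w), so each frequency is solved independently by linear algebra rather than stepped through time. A minimal numpy sketch with a made-up two-state system (purely illustrative, not GarlicSim code):

import numpy as np

# toy damped oscillator dx/dt = A x + f(t); all values are assumptions
A = np.array([[ 0.0,  1.0],
              [-4.0, -0.5]])
F = np.array([0.0, 1.0])               # unit forcing on the second state
omegas = np.linspace(0.1, 10.0, 100)   # frequencies of interest

I = np.eye(2)
# one algebraic solve per frequency: no step function, no time axis
X = np.array([np.linalg.solve(1j * w * I - A, F) for w in omegas])
print(np.abs(X[:, 0]).max())           # peak response of the first state
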
URL: From jrennie at gmail.com Tue Oct 6 17:05:30 2009 From: jrennie at gmail.com (Jason Rennie) Date: Tue, 6 Oct 2009 17:05:30 -0400 Subject: [SciPy-User] optimize.fmin_cg terminates when w - grad*1e-10 yields lower obj & grad In-Reply-To: <1cd32cbb0910051850u13a46399ub0553459968e7e32@mail.gmail.com> References: <75c31b2a0910050956y4ec3154fvfa02e5faebb06c48@mail.gmail.com> <75c31b2a0910051053h6c1b383cn3ecda47bc12259cf@mail.gmail.com> <1cd32cbb0910051135y8ee22eaq9900bb795f783cab@mail.gmail.com> <75c31b2a0910051140r71e45f6kcd10785fb465d7b3@mail.gmail.com> <1cd32cbb0910051151i37751db3wd9fb0f17256c7509@mail.gmail.com> <75c31b2a0910051311w18faf9e3w271e13ee8d2f1f9f@mail.gmail.com> <1cd32cbb0910051333s428c5a53n946eea24a93f3d29@mail.gmail.com> <75c31b2a0910051355x70018576l7906f56f72faec02@mail.gmail.com> <75c31b2a0910051532s4f819c38g206d699b7dafff79@mail.gmail.com> <1cd32cbb0910051850u13a46399ub0553459968e7e32@mail.gmail.com> Message-ID: <75c31b2a0910061405y7c6b896fuf1f09497366e8c6@mail.gmail.com> Good idea re: the test case. Please find one attached. I am fairly certain that this provides a solid, self-contained, relatively-simple test case that breaks CG. But let me know if anyone finds otherwise and I'm happy to try to help tweak to make it a solid test cases. I'm guessing that larger values of self.n and self.d would provide additional good tests. FYI, I threw in cases for BFGS and L-BFGS and found that they break too, but I didn't investigate. Re: amin. My read of the code is that linesearch.py tells minpack2.dcsrch to give up if it can't find a satisfactory step-size >= amin. Does anyone understand the rationale behind this setting? I noticed in the svn logs that it was changed from 1e-6 to 1e-8 at some point. Jason On Mon, Oct 5, 2009 at 9:50 PM, wrote: > Do you have a test case? What I have seen in the optimize.tests is only one > case for fmin_cg, which looks similar to your case > objective function > log_pdot = dot(self.F, x) > logZ = log(sum(exp(log_pdot))) > f = logZ - dot(self.K, x) > > but might have well behaved parameterization. > > If you can write a test case that works on the limit of the current > precision, > we could include it in the test suite. The same optimization problem is > used to test several minimizers, so this could also check whether any of > the other ones is able to handle this problem. > If zoom is also buggy, more work and a failing test case will be required > to > find and correct the bug. > > For your other comments, I don't know enough about fmin_cg. > amin=1e-12 Could this be a problem if the numerical precision of > the objective function and the gradient are not high enough? > > If you have a better cg algorithm or one that works better for some > cases, you could propose it for inclusion in scipy. > > Thanks for filing the ticket. > > Josef -- Jason Rennie Research Scientist, ITA Software 617-714-2645 http://www.itasoftware.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: scipycgtest.py Type: text/x-python Size: 3302 bytes Desc: not available URL: From cycomanic at gmail.com Tue Oct 6 18:00:40 2009 From: cycomanic at gmail.com (Jochen Schroeder) Date: Wed, 7 Oct 2009 09:00:40 +1100 Subject: [SciPy-User] scipy.optimize fmin error In-Reply-To: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com> References: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com> Message-ID: <20091006220039.GA29097@cudos0803> Hi Oz, just some small comments. On 10/06/09 20:24, Oz Nahum wrote: > Hi Guys, > I'm trying to convert a matlab code to python, but I'm having too much > difficulties. > It's almost two days I'm struggling with the differences between the > languages. My latest success is actually using the function fmin > (before that I had success with leastsq). > But I can't can't a reasonable result comparing to matlab or my > knowledge (or early assumptions on my observations). > > I attach here the code. > > I don't understand why alpha (the dispersivity) I'm trying to find is > always calculated to ~24m where in matlab I get a result of ~0.6m. > Note also the difference in quality of models (the hand picked values, > are the ones I got from matlab) > > Here's my code: > > from pylab import * > from numpy import * > from scipy.optimize import leastsq, fmin It is usually considered better style to keep the namespace of modules, i.e. instead of from numpy import * do: import numpy as np same for pylab. > > #injected mass in Kg > M = 0.01; > #distance between the wells > r = 8.81; #[m] > ## pumping rate > Q = 0.0061; #[m^3/sec] > ##thickness of aquifer saturated with water > b=4.61; #[m]; > ##uncertainty of the measurments (concentration measurments) > sigma_s = 0.01; # [m] > > #### > ## define the measurments > #### > > t = array([1.0,300.0,600.0, 900., 1200., 1260., 1320., 1380, \ > 1440, 1500, 1560, 1620, 1680, 1740, 1800, 1860, > 1920, 1980, 2040, 2100, 2160, 2220, 2280, 2340,\ > 2400, 2460, 2520, 2580, 2640, 2700, 2760, 2820,\ > 2880, 2940, 3000, 3060, 3120, 3180, 3240, 3300,\ > 3360, 3420, 3480, 3540, 3600, 3660, 3720, 3780,\ > 3840, 3900, 4200, 4500, 4800, 5100, 5400, 5700,\ > 6000, 6300, 6600, 6900, 7200, 7500, 7800, 8100,\ > 8400, 8700, 9000, 9300, 9600, 9900, 10200, 10500,\ > 10800, 11100, 11400, 12000]) > > t = t.transpose() > > c = array([0.07, 0.1, 0.11, 0.13, 1.17, 2.15, 3.65, 5.64,\ > 8.12, 11, 14.3, 17.3, 20.6, 23.5, 26.5, 29.1,\ > 31.5, 33.5, 35.3, 36.8, 37.9, 38.8, 39.5, 39.8,\ > 40.1, 40.2, 40.1, 39.9, 39.5, 39, 38.5, 37.9, \ > 37.3, 36.5, 35.9, 35.1, 34.4, 33.5, 32.9, 32, \ > 31.2, 30.5, 29.9, 29, 28.2, 27.5, 26.8, 26.1, \ > 25.4, 24.7, 21.7, 19, 16.8, 14.8, 13.3, 12.1, \ > 11, 10.1, 9.4, 8.81, 8.15, 7.71, 7.3, 6.98, \ > 6.67, 6.36, 6.12, 5.92, 5.78, 5.58, 5.41, 5.15, \ > 4.77, 4.54, 4.37, 4.19])-0.07*1e-9*1350 > > c=c.transpose() > cmax = max(c) > ## die index mit find(c==max(c)) kann man auch finden > tmax = t[find(c==cmax)]# 2460 ; #sec > cr=c/cmax # dimensionless concentration [-] > crmax = max(cr) > tr=t/int(tmax) # dimensionless time [-] > trmax = tr[find(cr==crmax)] AFAIK generally it is not a good idea to test for equality of floating point numbers. Instead you should test that the difference is smaller than a number epsilon, i.e. find(abs(c-cmax) > def residuals(alpha, tr, cr): > #defintion for Radial Flow Field > P = r/alpha#Peclet Number [-] > #using tmax here causes an error of the optimization > #it should be like the matlab version > tmax=sqrt(1+P**(-2))-1/P; > #see eq. 
21 in Sauty, 1980, this creates a tmax(P), and P(alpha) > K = tmax**0.5*exp((P/4/tmax)*(1-tmax)**2) > print K > A=-P/(4*tr)*(1-tr)**2#dimensionless > f=K/tr**0.5*exp(A)#dimensionless > #err=(f-cr) > B=(f-cr) > B=B**2 > N=len(B) > B=sum(B)/N > return B > > def OneDmodel(alpha, r): > P = r/alpha# #Peclet Number [-] > tmax=sqrt(1+P**(-2))-1/P# %see eq. 21 in Sauty, 1980, this creates a > tmax(P), and P(alpha) > K = sqrt(tmax)*exp(P/(4*tmax)*(1-tmax)**2) > print K > A=-P/(4*tr)*(1-tr)**2#; %dimensionless > f=K/tr**0.5*exp(A) #%dimensionless > return f > > p0 = 3 #initial alpha value > #x = arange(0,6e-2,6e-2/30) > #alpha = leastsq(residuals, p0, args=(cr, tr)) > alpha = fmin(residuals, 28, args=(cr,tr),maxiter=10000, maxfun=10000) > #print plsq[0] > #print alpha > #print plsq > print 'optimized dispersivity is ', alpha[0] > alpha=alpha[0] > oneDmodel = OneDmodel(alpha,r) > ### Plot > handpicked= OneDmodel(0.6307,r) > plot(tr,cr, 'r+-') > plot(tr, oneDmodel, 'bo-') > plot(tr, handpicked, 'g--') > > #print cr > #,x,y_meas,'o',x,y_true) > legend(['Real', 'Fit', 'Hand Picked']) > show() > > > Any kind of help will be more than appreciated ! > Thanks > > Oz Nahum > Graduate Student > Zentrum für Angewandte Geologie > Universität Tübingen > > --- > > Imagine there's no countries > it isn't hard to do > Nothing to kill or die for > And no religion too > Imagine all the people > Living life in peace > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From robfelty at gmail.com Tue Oct 6 19:17:42 2009 From: robfelty at gmail.com (Rob Felty) Date: Tue, 6 Oct 2009 17:17:42 -0600 Subject: [SciPy-User] problems with masked arrays Message-ID: I am trying to create an array which contains a mixture of strings, floats, and ints. However, some of the int and float values are missing. It seems that I should be able to use a masked array to do this, but I have been unable to get it to work quite right. If I specify the dtype for each column as a string, then it works. If I try to specify a column as an int and there is a missing value, it does not work. Here is a minimal example import numpy.ma as ma test = [('', '3-D', 7333, '', '', '', '', '', 'Tridi', '', '', 'GOOGLE', ''), (4, 'a', 1267005, 3, 1, "'1", '[VV]', '[eI]', '@', 7.0, 7.0, 'HML', '@')] test_mask = [(True, False, False, True, True, True, True, True, False, True, True, False, True), (False, False, False, False, False, False, False, False, False, False, False, False, False)] # this does not work, where I specify 'id' as an int test_array = ma.array(test, dtype=[('id',int),('orth', 'a53'), ('freq', 'i8'), ('lemmaID', 'a53'), ('phonCount', 'a53'), ('phonOrig', 'a53'), ('CV', 'a53'), ('phonSyl', 'a53'), ('XCRP', 'a53'), ('fam', 'a53'), ('dens', 'a53'), ('source', 'a53'), ('hmlSyl', 'a53')],mask=test_mask) #this does work test_array = ma.array(test, dtype=[('id','a53'),('orth', 'a53'), ('freq', 'i8'), ('lemmaID', 'a53'), ('phonCount', 'a53'), ('phonOrig', 'a53'), ('CV', 'a53'), ('phonSyl', 'a53'), ('XCRP', 'a53'), ('fam', 'a53'), ('dens', 'a53'), ('source', 'a53'), ('hmlSyl', 'a53')],mask=test_mask) Thanks in advance for any suggestions. Rob -------------- next part -------------- An HTML attachment was scrubbed... 
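For what it's worth, a minimal sketch of the failure mode, trimmed to three fields: ndarray cannot cast the empty string '' to int, so construction dies before the mask is even applied, whereas a typed dummy value that the mask then hides goes through (the dummy 0 and the shortened dtype are only for illustration):

import numpy.ma as ma

# missing int entry replaced by a dummy 0 of the right type;
# the mask marks it as missing, so the dummy never shows up
test = [(0, '3-D', 7333),
        (4, 'a', 1267005)]
test_mask = [(True, False, False),
             (False, False, False)]

arr = ma.array(test, dtype=[('id', int), ('orth', 'a53'), ('freq', 'i8')],
               mask=test_mask)
print(arr['id'])  # [-- 4]: the masked dummy is hidden
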
URL: From sccolbert at gmail.com Tue Oct 6 19:26:49 2009 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 7 Oct 2009 01:26:49 +0200 Subject: [SciPy-User] ANN: a journal paper about F2PY has been published In-Reply-To: <4AC9CEC5.6050705@cens.ioc.ee> References: <4AC9CEC5.6050705@cens.ioc.ee> Message-ID: <7f014ea60910061626l68a52e38n5dc94119976a9066@mail.gmail.com> Congratulations brother! On Mon, Oct 5, 2009 at 12:47 PM, Pearu Peterson wrote: > > > -------- Original Message -------- > Subject: [f2py] ANN: a journal paper about F2PY has been published > Date: Mon, 05 Oct 2009 11:52:20 +0300 > From: Pearu Peterson > Reply-To: For users of the f2py program > To: For users of the f2py program > > Hi, > > A journal paper about F2PY has been published in International Journal > of Computational Science and Engineering: > > ?Peterson, P. (2009) 'F2PY: a tool for connecting Fortran and Python > ?programs', Int. J. Computational Science and Engineering. > ?Vol.4, No. 4, pp.296-305. > > So, if you would like to cite F2PY in a paper or presentation, using > this reference is recommended. > > Interscience Publishers will update their web pages with the new journal > number within few weeks. A softcopy of the article > available in my homepage: > ?http://cens.ioc.ee/~pearu/papers/IJCSE4.4_Paper_8.pdf > > Best regards, > Pearu > > _______________________________________________ > f2py-users mailing list > f2py-users at cens.ioc.ee > http://cens.ioc.ee/mailman/listinfo/f2py-users > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pgmdevlist at gmail.com Tue Oct 6 20:26:46 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 6 Oct 2009 20:26:46 -0400 Subject: [SciPy-User] problems with masked arrays In-Reply-To: References: Message-ID: <5C11CC74-F89F-45EA-9840-165216036A62@gmail.com> [Rob, it's customary to give more info than "it doesn't work": please post an error message w/ the version of numpy you're running] On Oct 6, 2009, at 7:17 PM, Rob Felty wrote: > I am trying to create an array which contains a mixture of strings, > floats, and ints. Do you create it by hand, or do you read the data from a file-like object ? If the latter, could you try genfromtxt ? This function should be able to take care of potential missing values for you. If the former, yes, you gonna run into problem, and numpy.ma wont be able to help you. See, your missing entries are '', which are interpreted as string, when you'd want some other type (eg, int for your 'id' field), and ndarray chokes on that. As numpy.ma.array calls ndarray under the hood, there's nothing it can do. Now, you should still be able to use genfromtxt. Using your test: >>> # transform the initial list of tuples into a list of strings >>> data=[";".join(str(_) for _ in t) for t in test] >>> # Call np.mafromtx >>>np.mafromtxt(StringIO.StringIO("\n".join (data)),delimiter=";",dtype=None) masked_array(data = [(--, '3-D', 7333, --, --, --, --, --, 'Tridi', --, --, 'GOOGLE', --) (4, 'a', 1267005, 3, 1, "'1", '[VV]', '[eI]', '@', 7.0, 7.0, 'HML', '@')], mask = [ (True, False, False, True, True, True, True, True, False, True, True, False, True) (False, False, False, False, False, False, False, False, False, False, False, False, False)], fill_value = (999999, 'N/A', 999999, 999999, 999999, 'N/', 'N/ A', 'N/A', 'N/A', 1e+20, 1e+20, 'N/A', 'N'), dtype = [('f0', ' I am having trouble with fitting data to an exponential curve. 
I have an x-y data series that I would like to fit to an exponential using least squares and have access to the covariance matrix of the result. I summarize my problem in the following example: import numpy as np import scipy as sp from scipy.optimize.minpack import curve_fit A, B = 5, 0.5 x = np.linspace(0, 5, 10) real_f = lambda x: A * np.exp(-1.0 * B * x) y = real_f(x) ynoisy = y + 0.01 * np.random.randn(len(x)) exp_f = lambda x, a, b: a * np.exp(-1.0 * b * x) # this line raises the error: # RuntimeError: Optimal parameters not found: Both # actual and predicted relative reductions in the sum of squares # are at most 0.000000 and the relative error between two # consecutive iterates is at most 0.000000 params, cov = curve_fit(exp_f, x, ynoisy) I have tried to use the minpack.leastsq function directly with similar results. I also tried taking the log and fitting to a line with no success. The results are the same using scipy 0.7.1 as well as 0.8.0.dev5953. Am I not using the curve_fit function correctly? Thanks, ~Kris -- Heisenberg went for a drive and got stopped by a traffic cop. The cop asked, "Do you know how fast you were going?" Heisenberg replied, "No, but I know where I am." -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Oct 7 02:19:10 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 7 Oct 2009 02:19:10 -0400 Subject: [SciPy-User] curve_fit and least squares In-Reply-To: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> Message-ID: <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> On Wed, Oct 7, 2009 at 1:36 AM, Kris Maynard wrote: > I am having trouble with fitting data to an exponential curve. I have an x-y > data series that I would like to fit to an exponential using least squares > and have access to the covariance matrix of the result. I summarize my > problem in the following example: > > import numpy as np > import scipy as sp > from scipy.optimize.minpack import curve_fit > > A, B = 5, 0.5 > x = np.linspace(0, 5, 10) > real_f = lambda x: A * np.exp(-1.0 * B * x) > y = real_f(x) > ynoisy = y + 0.01 * np.random.randn(len(x)) > > exp_f = lambda x, a, b: a * np.exp(-1.0 * b * x) > > # this line raises the error: > > #? RuntimeError: Optimal parameters not found: Both > > #? actual and predicted relative reductions in the sum of squares > > #? are at most 0.000000 and the relative error between two > > #? consecutive iterates is at most 0.000000 > > params, cov = curve_fit(exp_f, x, ynoisy) this might be the same as http://projects.scipy.org/scipy/ticket/984 and http://mail.scipy.org/pipermail/scipy-user/2009-August/022090.html If I increase your noise standard deviation from 0.1 to 0.2 then I do get correct estimation results in your example. > > I have tried to use the minpack.leastsq function directly with similar > results. I also tried taking the log and fitting to a line with no success. > The results are the same using scipy 0.7.1 as well as 0.8.0.dev5953. Am I > not using the curve_fit function correctly? With minpack.leastsq error code 2 should be just a warning. If you get incorrect parameter estimates with optimize.leastsq, besides the warning, could you post the example so I can have a look. 
It looks like if you take logs then you would have a problem that is linear in (transformed) parameters, where you could use linear least squares if you just want a fit without the standard errors of the original parameters (constant) I hope that helps. Josef > Thanks, > ~Kris > -- > Heisenberg went for a drive and got stopped by a traffic cop. The cop asked, > "Do you know how fast you were going?" Heisenberg replied, "No, but I know > where I am." > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From nicoletti at consorzio-innova.it Wed Oct 7 03:44:21 2009 From: nicoletti at consorzio-innova.it (Marco Nicoletti) Date: Wed, 07 Oct 2009 09:44:21 +0200 Subject: [SciPy-User] *****SPAM***** SciPy-User Digest, Vol 73, Issue 54 In-Reply-To: References: Message-ID: <1254901461.7167.10.camel@nicolettiws> Dear Joe, what I want to do is to interpolate the position array adding the constrain about the first derivative (the velocity array) whiche I already have. An example: I have t = [0,2,4,6] (the definition domain), p(t) = [10,13,16,18] (the position array) and v(t) = [1.1, 0.8, 0.7, 0.4]. I have this new domain t1 = [0,1,2,3,4,5,6] and I want to obtain p1(t1) imposing the constrain about the velocities v1(t) = v(t). I want the spline interpolation. Any ideas? Thanks in advanced for your advices. Marco Nicoletti Il giorno mer, 30/09/2009 alle 17.15 -0500, scipy-user-request at scipy.org ha scritto: > Send SciPy-User mailing list submissions to > scipy-user at scipy.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.scipy.org/mailman/listinfo/scipy-user > or, via email, send a message with subject or body 'help' to > scipy-user-request at scipy.org > > You can reach the person managing the list at > scipy-user-owner at scipy.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of SciPy-User digest..." > > > Today's Topics: > > 1. Re: Forced derivative interpolation?? (Joe Kington) > 2. memoryError when i have plenty of available ram (Gustaf Nilsson) > 3. Re: Forced derivative interpolation?? (Anne Archibald) > 4. scipy.reddit.com (David Warde-Farley) > 5. numpy.squeeze not squeezing (Bruce Ford) > 6. Re: numpy.squeeze not squeezing (Robert Kern) > 7. Re: numpy.squeeze not squeezing (Bruce Ford) > 8. Re: numpy.squeeze not squeezing (Robert Kern) > 9. Re: numpy.squeeze not squeezing (Bruce Ford) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 30 Sep 2009 13:18:53 -0500 > From: Joe Kington > Subject: Re: [SciPy-User] Forced derivative interpolation?? > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Hi Marco, > > Exactly what sort of constraints are you wanting to apply? > i.e. > Do you want a specific velocity or acceleration everywhere (and get a least > squares fit to the other parameters)? > Do you want to minimize the acceleration or velocity while still fitting the > data? > Does the interpolation need to fit the data exactly at each point where you > have data, or the best fit between your constraints and the data values? > > Basically, as far as I know, there isn't a pre-built function in scipy to do > what you want, but it's not hard to write code to do what you want. If you > can describe what you need in a bit more detail, I'm pretty sure I can point > you in the right direction. 
> > -Joe > > On Wed, Sep 30, 2009 at 3:09 AM, Marco Nicoletti < > nicoletti at consorzio-innova.it> wrote: > > > Dear all, > > > > I want to implement a spline interpolation forcing the condition on the > > first or second derivative. > > In other words I have a vector of position (p), velocity (v) and > > acceleration (a) values; > > I want to interpolate the position (p) vector imposing the conditions on > > the velocity and acceleration values. > > > > The class UnivariateSpline() or intrp1D() in scipy.interpolate package > > don't take as parameter the derivatives > > (they export a method to evaluate derivatives). > > > > Any suggestions? > > > > Thanks very much and have a nice day! > > > > Marco Nicoletti > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20090930/6567f5e7/attachment-0001.html > > ------------------------------ > > Message: 2 > Date: Wed, 30 Sep 2009 20:58:46 +0200 > From: Gustaf Nilsson > Subject: [SciPy-User] memoryError when i have plenty of available ram > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hiya > > I know someone just started a memory thread, but i didnt wanna hijack it.. > My image processing app that im working on seems to crash with "memoryError" > when it hits about 1.1gb of mem usage (same on two computers; has 2/4gb ram, > xp 32bit) > Im working with 12mpixel images at 32bit floating point, so each block of > memory used in different operations is about 140mb (if that helps) > > Is it actually because it runs out of memory or can the error mean something > else? > > cheers > Gusty > -- > ? ? ? ? ? ? ? ? ? ? > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20090930/ef357136/attachment-0001.html > > ------------------------------ > > Message: 3 > Date: Wed, 30 Sep 2009 15:12:40 -0400 > From: Anne Archibald > Subject: Re: [SciPy-User] Forced derivative interpolation?? > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > 2009/9/30 Marco Nicoletti : > > Dear all, > > > > I want to implement a spline interpolation forcing the condition on the > > first or second derivative. > > In other words I have a vector of position (p), velocity (v) and > > acceleration (a) values; > > I want to interpolate the position (p) vector imposing the conditions on the > > velocity and acceleration values. > > > > The class UnivariateSpline() or intrp1D() in scipy.interpolate package don't > > take as parameter the derivatives > > (they export a method to evaluate derivatives). > > > > Any suggestions? > > If I have correctly understood your question, what you want to do is > produce an interpolating spline with not just specified point values > but specified derivative values at the given points. Scipy has at > least two different pieces of code that might help. The first is, in > recent versions of scipy, scipy.interpolate.PiecewisePolynomial. This > allows you to fit a piecewise polynomial through a set of points, > specifying derivatives at each point. It doesn't allow you to impose a > spline-like constraint that higher derivatives must be continuous at > the points. 
Its evaluation is also implemented in pure python, so it > won't be terribly fast. > > A second option, useful if you need fast evaluation, is to abuse > scipy's spline functions. scipy.interpolate.splrep doesn't take > derivatives, but what it returns is a triple t, c, k. Given a t, c, k, > you can then call splev, splint, splder, etcetera to get nice fast > evaluation in compiled code. So what you can do is fabricate your own > t, c, and k values. t is the list of knots, c is some sort of > coefficients, and k is the order of the spline. The brute-force way I > found to get these splines to produce the derivatives I wanted > required me to repeat values in the t array. But once you've fixed the > t array, the result is linear in the c values, so a little trial and > error will give you formulas to produce any curve you need. > > Good luck, > Anne > > Thanks very much and have a nice day! > > > > Marco Nicoletti > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > ------------------------------ > > Message: 4 > Date: Wed, 30 Sep 2009 16:51:59 -0400 > From: David Warde-Farley > Subject: [SciPy-User] scipy.reddit.com > To: Discussion of Numerical Python , SciPy > Users List , ipython-user at scipy.net > Message-ID: <2F5DD634-70CB-4A5B-ADD8-F3FFCDF41A1B at cs.toronto.edu> > Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes > > In the spirit of the 'advice' site, and given that we're thinking of > moving scipy.org to more static content (once I have some free time on > my hands again, which should be soon!), I set up a 'subreddit' on > reddit.com for Python-in-Science related links. I even came up with a > somewhat spiffy logo for it. > > Think of it as a communal, collaboratively filtered (via up/down > votes, using the arrows next to each submission) bookmarks folder/news > site/etc. > > I'd encourage people to use it and add to it if they feel it might be > of use to the community. > > The address is http://scipy.reddit.com/ , or equivalently http://www.reddit.com/r/scipy > > David > > > > > ------------------------------ > > Message: 5 > Date: Wed, 30 Sep 2009 16:44:29 -0400 > From: Bruce Ford > Subject: [SciPy-User] numpy.squeeze not squeezing > To: scipy-user at scipy.org > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > Squeeze doesn't seem to be squeezing. What am I missing? > > An array extracted from a NetCDF3 file using NetCDF4 is shaped: (248,1,181.360) > > I want it to be shaped (248,181,360) > > out = np.squeeze(in) > print out.shape > > yeilds () > > Am I missing a step? > > Any assistance would be appreciated! > > Bruce > --------------------------------------- > Bruce W. Ford > Clear Science, Inc. > bruce at clearscienceinc.com > > > ------------------------------ > > Message: 6 > Date: Wed, 30 Sep 2009 16:16:39 -0500 > From: Robert Kern > Subject: Re: [SciPy-User] numpy.squeeze not squeezing > To: SciPy Users List > Message-ID: > <3d375d730909301416w4eb6dbbei9b47a22bd141b91a at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Wed, Sep 30, 2009 at 15:44, Bruce Ford wrote: > > Squeeze doesn't seem to be squeezing. ?What am I missing? > > > > An array extracted from a NetCDF3 file using NetCDF4 is shaped: ?(248,1,181.360) > > > > I want it to be shaped (248,181,360) > > > > out = np.squeeze(in) > > print out.shape > > > > yeilds ?() > > > > Am I missing a step? 
> > It works for me: > > In [1]: x = np.empty((248,1,181,360)) > > In [2]: np.squeeze(x).shape > Out[2]: (248, 181, 360) > > Can you give us a minimal, self-contained script that demonstrates the > problem? Being self-contained will probably be impossible, but even > seeing such a minimal script will be helpful even if we can't run it > with your data file. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > > > ------------------------------ > > Message: 7 > Date: Wed, 30 Sep 2009 18:03:25 -0400 > From: Bruce Ford > Subject: Re: [SciPy-User] numpy.squeeze not squeezing > To: SciPy Users List > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > Robert, thanks for responding. Your response makes me things > something in my array is preventing squeeze form working correctly. > Here's a cleaned up version of my script: > > > #!/usr/local/env python > from mpl_toolkits.basemap import Basemap > import numpy as np #used to preform simple math functions on data > from netCDF4 import Dataset > #decide which file to open > year = 1995 > month = "%02d" % 5 > > #Set up file names > filename = "/data/ww3/NetCDF/3_hourly/ww3."+str(year)+str(month)+ ".nc" > opennc = Dataset(filename, mode="r") > > swh = opennc.variables['sig_wav_ht'] > print swh.shape #gives (248,1,181,360) > swh1 = np.squeeze(swh) > > print 'SWH shape: ', swh1.shape #gives () > x = np.zeros((248,1,181,360)) > y = np.squeeze(x) > print y.shape #give (248,181,369) > > > --------------------------------------- > Bruce W. Ford > Clear Science, Inc. > bruce at clearscienceinc.com > bruce.w.ford.ctr at navy.smil.mil > http://www.ClearScienceInc.com > Phone/Fax: 904-379-9704 > 8241 Parkridge Circle N. > Jacksonville, FL 32211 > Skype: bruce.w.ford > Google Talk: fordbw at gmail.com > > > > On Wed, Sep 30, 2009 at 5:16 PM, Robert Kern wrote: > > On Wed, Sep 30, 2009 at 15:44, Bruce Ford wrote: > >> Squeeze doesn't seem to be squeezing. ?What am I missing? > >> > >> An array extracted from a NetCDF3 file using NetCDF4 is shaped: ?(248,1,181.360) > >> > >> I want it to be shaped (248,181,360) > >> > >> out = np.squeeze(in) > >> print out.shape > >> > >> yeilds ?() > >> > >> Am I missing a step? > > > > It works for me: > > > > In [1]: x = np.empty((248,1,181,360)) > > > > In [2]: np.squeeze(x).shape > > Out[2]: (248, 181, 360) > > > > Can you give us a minimal, self-contained script that demonstrates the > > problem? Being self-contained will probably be impossible, but even > > seeing such a minimal script will be helpful even if we can't run it > > with your data file. > > > > -- > > Robert Kern > > > > "I have come to believe that the whole world is an enigma, a harmless > > enigma that is made terrible by our own mad attempt to interpret it as > > though it had an underlying truth." 
> > ?-- Umberto Eco > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > ------------------------------ > > Message: 8 > Date: Wed, 30 Sep 2009 17:05:58 -0500 > From: Robert Kern > Subject: Re: [SciPy-User] numpy.squeeze not squeezing > To: SciPy Users List > Message-ID: > <3d375d730909301505q221473d6i450fcb23b14cfd58 at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Wed, Sep 30, 2009 at 17:03, Bruce Ford wrote: > > Robert, thanks for responding. ?Your response makes me things > > something in my array is preventing squeeze form working correctly. > > Here's a cleaned up version of my script: > > > > > > #!/usr/local/env python > > from mpl_toolkits.basemap import Basemap > > import numpy as np #used to preform simple math functions on data > > from netCDF4 import Dataset > > #decide which file to open > > year = 1995 > > month = "%02d" % 5 > > > > #Set up file names > > filename = "/data/ww3/NetCDF/3_hourly/ww3."+str(year)+str(month)+ ".nc" > > opennc = Dataset(filename, mode="r") > > > > swh = opennc.variables['sig_wav_ht'] > > print swh.shape ?#gives (248,1,181,360) > > swh1 = np.squeeze(swh) > > > > print 'SWH shape: ', swh1.shape ?#gives () > > print type(swh1) > > I'm not sure that swh1 is actually an ndarray. It might be a different > class that masquerades as a numpy array. > From timmichelsen at gmx-topmail.de Wed Oct 7 05:01:06 2009 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Wed, 7 Oct 2009 09:01:06 +0000 (UTC) Subject: [SciPy-User] satellite imagery References: <91d218430908171353j3d4b43ectb44001e3b39fd9@mail.gmail.com> Message-ID: FYI: A new mailing list was created for the Python & geoprocessing stuff: Unofficial Python GIS SIG http://groups.google.com/group/python-gis-sig and a nice tutorial was posted on that list, too: Geoprocessing with Python using Open Source GIS http://www.gis.usu.edu/~chrisg/python/ Best, Timmie From denis-bz-gg at t-online.de Wed Oct 7 05:46:53 2009 From: denis-bz-gg at t-online.de (denis) Date: Wed, 7 Oct 2009 02:46:53 -0700 (PDT) Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> Message-ID: Thanks Ralf, Thanks David, I've put the note, with your suggestions, in http://advice.mechanicalkern.com/question/17/getting-started-with-2d-interpolation-in-scipy I think a collection of getting-started / microtutorials, easier than the Cookbook, might be useful; and I like the rating / comment system in mechanicalkern / stackoverflow. We'll see if "answers from that site [mechanicalkern] get integrated in the reference/user guides where appropriate." If not, please advise. Is the editor in mechanicalkern the same as the one in the doc wiki ? Is there a local version of either, to first compose locally ? (OK I'll go over to scipy-dev but it's not in google groups and the Gmane nntp reader seems to be broken on my mac, what do you two use ?) 
cheers -- denis From ralf.gommers at googlemail.com Wed Oct 7 06:25:45 2009 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 7 Oct 2009 12:25:45 +0200 Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> Message-ID: On Wed, Oct 7, 2009 at 11:46 AM, denis wrote: > Thanks Ralf, Thanks David, > I've put the note, with your suggestions, in > > http://advice.mechanicalkern.com/question/17/getting-started-with-2d-interpolation-in-scipy > > I think a collection of getting-started / microtutorials, easier than > the Cookbook, > might be useful; and I like the rating / comment system in > mechanicalkern / stackoverflow. > We'll see if > "answers from that site [mechanicalkern] get integrated in the > reference/user guides where appropriate." > If not, please advise. > > Is the editor in mechanicalkern the same as the one in the doc wiki ? > No. The doc wiki editor accepts reST, the one in mechanicalkern accepts Markdown (I think, but that should really be stated somewhere). Is there a local version of either, to first compose locally ? > > Any decent text editor will do, as long as you know how to use the markup. There's no local renderer as far as I know. > (OK I'll go over to scipy-dev > but it's not in google groups and the Gmane nntp reader seems to be > broken on my mac, > what do you two use ?) > > I just read it in gmail. Cheers, Ralf > cheers > -- denis > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed Oct 7 07:05:46 2009 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 7 Oct 2009 13:05:46 +0200 Subject: [SciPy-User] ANN: Image Processing SciKit In-Reply-To: References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Message-ID: <9457e7c80910070405i489879fdu3587e64063203064@mail.gmail.com> Hi David 2009/9/26 David Warde-Farley : > This is exactly the kind of low-hanging fruit a SciPy/scikits/open > source newcomer (or long-time user, first-time contributor) could do > to get their feet wet, by the way :) ?It's basically a matter of Thanks for providing instructions on how to contribute to scikits.image. I've merged these with the other tasks, now available at http://stefanv.github.com/scikits.image/contribute.html Cheers St?fan From giorgio.luciano at inwind.it Wed Oct 7 09:30:08 2009 From: giorgio.luciano at inwind.it (giorgio.luciano at inwind.it) Date: Wed, 7 Oct 2009 15:30:08 +0200 Subject: [SciPy-User] Response surface methodology Message-ID: Does anyone has some python code (or eventually R code that can be imported in python) related to Response Surface Methodology ? 
I've tried to dig around but nothing seems available http://en.wikipedia.org/wiki/Response_surface_methodology Thanks in advance for any suggestions Giorgio From bsouthey at gmail.com Wed Oct 7 09:40:04 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 7 Oct 2009 08:40:04 -0500 Subject: [SciPy-User] curve_fit and least squares In-Reply-To: <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> Message-ID: On Wed, Oct 7, 2009 at 1:19 AM, wrote: > On Wed, Oct 7, 2009 at 1:36 AM, Kris Maynard wrote: >> I am having trouble with fitting data to an exponential curve. I have an x-y >> data series that I would like to fit to an exponential using least squares >> and have access to the covariance matrix of the result. I summarize my >> problem in the following example: >> >> import numpy as np >> import scipy as sp >> from scipy.optimize.minpack import curve_fit >> >> A, B = 5, 0.5 >> x = np.linspace(0, 5, 10) >> real_f = lambda x: A * np.exp(-1.0 * B * x) >> y = real_f(x) >> ynoisy = y + 0.01 * np.random.randn(len(x)) >> >> exp_f = lambda x, a, b: a * np.exp(-1.0 * b * x) >> >> # this line raises the error: >> >> #? RuntimeError: Optimal parameters not found: Both >> >> #? actual and predicted relative reductions in the sum of squares >> >> #? are at most 0.000000 and the relative error between two >> >> #? consecutive iterates is at most 0.000000 >> >> params, cov = curve_fit(exp_f, x, ynoisy) > Could you please first plot your data? As you would see, the curve is very poorly defined with those model parameters and range. So you are asking a lot from your model and data. At least you need a wider range with those parameters or Josef says different parameter(s): > this might be the same as ?http://projects.scipy.org/scipy/ticket/984 and > http://mail.scipy.org/pipermail/scipy-user/2009-August/022090.html > > If I increase your noise standard deviation from 0.1 to 0.2 then I do get > correct estimation results in your example. > >> >> I have tried to use the minpack.leastsq function directly with similar >> results. I also tried taking the log and fitting to a line with no success. >> The results are the same using scipy 0.7.1 as well as 0.8.0.dev5953. Am I >> not using the curve_fit function correctly? > > With ? minpack.leastsq ? error code 2 should be just a warning. If you get > incorrect parameter estimates with optimize.leastsq, besides the warning, could > you post the example so I can have a look. > > It looks like if you take logs then you would have a problem that is linear in > (transformed) parameters, where you could use linear least squares if you > just want a fit without the standard errors of the original parameters > (constant) The errors will be multiplicative rather than additive. Bruce > > I hope that helps. > > Josef > > >> Thanks, >> ~Kris >> -- >> Heisenberg went for a drive and got stopped by a traffic cop. The cop asked, >> "Do you know how fast you were going?" Heisenberg replied, "No, but I know >> where I am." 
>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi, From lev at columbia.edu Wed Oct 7 10:42:39 2009 From: lev at columbia.edu (Lev Givon) Date: Wed, 7 Oct 2009 10:42:39 -0400 Subject: [SciPy-User] ANN: Image Processing SciKit In-Reply-To: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> Message-ID: <20091007144239.GB26088@localhost.columbia.edu> Received from St?fan van der Walt on Thu, Sep 24, 2009 at 01:43:45PM EDT: > Hi all, > > After a short sprint at SciPy 2009, we've put together the > infrastructure for an Image Processing SciKit. The source code [1] > and documentatin [2] is available online. WIth the infrastructure in > place, the next focus will be on getting contributions (listed at [3]) > merged. > > If you have code for generally useful image processing algorithms > available, please consider contributing. Feel free to join further > discussions on the scikit mailing list [4]. > > Kind regards > St?fan Would it be possible to create a 0.1 tag so that users know what commit "version 0.1" corresponds to? L.G. From robert.kern at gmail.com Wed Oct 7 11:14:40 2009 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 7 Oct 2009 10:14:40 -0500 Subject: [SciPy-User] *****SPAM***** SciPy-User Digest, Vol 73, Issue 54 In-Reply-To: <1254901461.7167.10.camel@nicolettiws> References: <1254901461.7167.10.camel@nicolettiws> Message-ID: <3d375d730910070814w4dca8d8ci4be06f9ba3218ca3@mail.gmail.com> If you want to reply to messages on this mailing list, please subscribe regularly and do not use the Digest. If you feel that you must use the Digest, please trim the quoted material to just the message you are responding to and mimic the correct Subject line appropriately. Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dominique.orban at gmail.com Wed Oct 7 11:18:31 2009 From: dominique.orban at gmail.com (dpo) Date: Wed, 7 Oct 2009 08:18:31 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Custom sections in site.cfg Message-ID: <25788619.post@talk.nabble.com> Hi all, Is it possible / easy to add custom sections to a site.cfg for a project that relies upon Numpy? I need BLAS, LAPACK etc., and Numpy distutils lets me grab those conveniently from site.cfg but I'd like to also add a few extra sections. Thanks for any pointer, suggestion, or example! -- View this message in context: http://www.nabble.com/Custom-sections-in-site.cfg-tp25788619p25788619.html Sent from the Scipy-User mailing list archive at Nabble.com. From Dharhas.Pothina at twdb.state.tx.us Wed Oct 7 11:25:37 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 07 Oct 2009 10:25:37 -0500 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation Message-ID: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> Hi, It took me a while and a lot of trial and error to work out why this didn't work as expected. data = np.genfromtxt(fname,usecols=(2,3,4),names='x,y,z') this command works and does not return any warnings or errors, but returns an numpy array with no field names. 
If you use: data = np.genfromtxt(fname,usecols=(2,3,4),dtype=None,names='x,y,z') then the command does what I expect it to and returns a structured numpy array with field names. So essentially, the 'names' argument doesn't not work unless you also specify the 'dtype' argument. I think, it would be less confusing to new users to either have this explicitly mentioned in the documentation string for the genfromtxt 'names' argument or to have the function default to 'dtype=None' if the 'names' argument is specified without specifying the 'dtype' argument. - dharhas From jsseabold at gmail.com Wed Oct 7 11:52:29 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 7 Oct 2009 11:52:29 -0400 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> Message-ID: On Wed, Oct 7, 2009 at 11:25 AM, Dharhas Pothina wrote: > Hi, > > It took me a while and a lot of trial and error to work out why this didn't work as expected. > > data = np.genfromtxt(fname,usecols=(2,3,4),names='x,y,z') > > this command works and does not return any warnings or errors, but returns an numpy array with no field names. If you use: > > data = np.genfromtxt(fname,usecols=(2,3,4),dtype=None,names='x,y,z') > > then the command does what I expect it to and returns a structured numpy array with field names. So essentially, the 'names' argument doesn't not work unless you also specify the 'dtype' argument. > > I think, it would be less confusing to new users to either have this explicitly mentioned in the documentation string for the genfromtxt 'names' argument or to have the function default to 'dtype=None' ?if the 'names' argument is specified without specifying the 'dtype' argument. > > - dharhas I came across this behavior recently and agree with you. There is a patch in the works for this. See this thread: http://thread.gmane.org/gmane.comp.python.numeric.general/33479 And this ticket: http://projects.scipy.org/numpy/ticket/1252 Cheers, Skipper From d.l.goldsmith at gmail.com Wed Oct 7 14:35:00 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Wed, 7 Oct 2009 11:35:00 -0700 Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> Message-ID: <45d1ab480910071135k51b6296dm24e5d41bbfc3ac52@mail.gmail.com> On Wed, Oct 7, 2009 at 3:25 AM, Ralf Gommers wrote: > > On Wed, Oct 7, 2009 at 11:46 AM, denis wrote: > >> Thanks Ralf, Thanks David, >> I've put the note, with your suggestions, in >> >> http://advice.mechanicalkern.com/question/17/getting-started-with-2d-interpolation-in-scipy >> >> I think a collection of getting-started / microtutorials, easier than >> the Cookbook, >> might be useful; and I like the rating / comment system in >> mechanicalkern / stackoverflow. >> We'll see if >> "answers from that site [mechanicalkern] get integrated in the >> reference/user guides where appropriate." >> If not, please advise. >> >> Is the editor in mechanicalkern the same as the one in the doc wiki ? >> > > No. The doc wiki editor accepts reST, the one in mechanicalkern accepts > Markdown (I think, but that should really be stated somewhere). > Note: our wiki doc editor is a custom django-based app - pydocweb - written by several of our scipy developers, now largely maintained by Pauli Vertanin. Is there a local version of either, to first compose locally ? 
>> >> Any decent text editor will do, as long as you know how to use the markup. > There's no local renderer as far as I know. > If you really, really want a renderer on your local machine, you can run pydocweb on your machine, and it will render the reST mark-up (as well as provide you with all its other functionality as well). However, pydocweb has a Preview function, and it doesn't take too long to learn the reST basics, so most (all, I'd guess) of us simply edit in a basic text editor, cut-n-paste our work into the editing panel pydocweb provides (I actually edit right in there, but don't tell anybody, ;-) 'cause nominally we discourage that), preview to check our mark-up, perform any corrections, and then save our work in pydocweb. > > >> (OK I'll go over to scipy-dev >> but it's not in google groups and the Gmane nntp reader seems to be >> broken on my mac, >> what do you two use ?) >> >> I just read it in gmail. > Ditto. (Gmail automatically threads your inbox, if that's what you're afraid you'd miss). DG > > Cheers, > Ralf > > >> cheers >> -- denis >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Wed Oct 7 15:20:18 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 07 Oct 2009 14:20:18 -0500 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> Message-ID: <4ACCE9F2.4090002@gmail.com> On 10/07/2009 10:52 AM, Skipper Seabold wrote: > On Wed, Oct 7, 2009 at 11:25 AM, Dharhas Pothina > wrote: > >> Hi, >> >> It took me a while and a lot of trial and error to work out why this didn't work as expected. >> >> data = np.genfromtxt(fname,usecols=(2,3,4),names='x,y,z') >> >> this command works and does not return any warnings or errors, but returns an numpy array with no field names. If you use: >> >> data = np.genfromtxt(fname,usecols=(2,3,4),dtype=None,names='x,y,z') >> >> then the command does what I expect it to and returns a structured numpy array with field names. So essentially, the 'names' argument doesn't not work unless you also specify the 'dtype' argument. >> What did you actually expect? It would be very informative if you could provide a simple example of this for testing. There are many combinations of arguments so not all have been tested and it is not always clear what the expected behavior should be. >> I think, it would be less confusing to new users to either have this explicitly mentioned in the documentation string for the genfromtxt 'names' argument or to have the function default to 'dtype=None' if the 'names' argument is specified without specifying the 'dtype' argument. >> >> - dharhas >> > I came across this behavior recently and agree with you. There is a > patch in the works for this. > > See this thread: http://thread.gmane.org/gmane.comp.python.numeric.general/33479 > > And this ticket: http://projects.scipy.org/numpy/ticket/1252 > > Cheers, > > Skipper > From the numpy help, there is this example: data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), ('mystring','S5')], delimiter=",") It does not help that the dtype of structured arrays also includes the actual name. 
So I do not think we can use dtype argument without using the combination of dtype and name. Perhaps if dtype is split into names and formats so that dtype=('name', 'format'). In some sense you are suggesting that we should have something like: Ignore the use of None and True for dtype and names arguments: i) If only dtype is only specified then use the specified dtype and add default names such as col1, col2,... if necessary ii) If names is only specified then contruct the dtype as ('name', 'default format') iii) If formats is only specified then construct the dtype as ('default name', 'format') iv) If only names and formats are only specified then construct the dtype as ('name', 'format') v) If no dtype, names and formats are only specified then construct the dtype as ('default name', 'default format') vi) If dtype and names or formats are specified then use dtype if it is of the form ('name', 'format') or use one of the previous cases. When dtype is None this implies format is None so the format is obtained from the data. If names is not True then the names are either from the argument or default values. If names argument is True then the names should be read from the data and one of the previous cases apply. Bruce From pav at iki.fi Wed Oct 7 15:28:38 2009 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 07 Oct 2009 22:28:38 +0300 Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> Message-ID: <1254943717.4757.19.camel@idol> ke, 2009-10-07 kello 02:46 -0700, denis kirjoitti: [clip: doc wiki] > Is there a local version of either, to first compose locally ? There's a RST previewer/editor that understands Sphinx syntax somewhere -- IIRC somewhere in Enthought's SVN repository. I also remember some talk of another editor/previewer on planet.python.org from some time ago. -- Pauli Virtanen From gokhansever at gmail.com Wed Oct 7 15:32:23 2009 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Wed, 7 Oct 2009 14:32:23 -0500 Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: <1254943717.4757.19.camel@idol> References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> <1254943717.4757.19.camel@idol> Message-ID: <49d6b3500910071232k5ff25d19mdc100dc54aadb709@mail.gmail.com> On Wed, Oct 7, 2009 at 2:28 PM, Pauli Virtanen wrote: > ke, 2009-10-07 kello 02:46 -0700, denis kirjoitti: > [clip: doc wiki] > > Is there a local version of either, to first compose locally ? > > There's a RST previewer/editor that understands Sphinx syntax somewhere > -- IIRC somewhere in Enthought's SVN repository. ETS_3.3.1/AppTools_3.3.1/enthought/rst/app.py > I also remember some > talk of another editor/previewer on planet.python.org from some time > ago. > > -- > Pauli Virtanen > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... 
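A small self-contained demonstration of the names/dtype interaction described earlier in this thread, as genfromtxt behaves before any of the proposed patches (the column data are made up for the example):

import numpy as np
from StringIO import StringIO  # Python 2, matching the rest of the thread

s = "1 2.5 10\n2 4.5 20"

# names given but dtype left at its default (float): per the thread,
# the names are silently dropped and a plain 2-D float array comes back
a = np.genfromtxt(StringIO(s), names='x,y,z')
print(a.dtype.names)  # None

# names plus dtype=None: genfromtxt guesses per-column types and
# returns a structured array with the requested field names
b = np.genfromtxt(StringIO(s), dtype=None, names='x,y,z')
print(b.dtype.names)  # ('x', 'y', 'z')
print(b['y'])         # [ 2.5  4.5]
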
URL: From jsseabold at gmail.com Wed Oct 7 16:22:18 2009 From: jsseabold at gmail.com (Skipper Seabold) Date: Wed, 7 Oct 2009 16:22:18 -0400 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: <4ACCE9F2.4090002@gmail.com> References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> Message-ID: On Wed, Oct 7, 2009 at 3:20 PM, Bruce Southey wrote: > On 10/07/2009 10:52 AM, Skipper Seabold wrote: >> On Wed, Oct 7, 2009 at 11:25 AM, Dharhas Pothina >> ?wrote: >> >>> Hi, >>> >>> It took me a while and a lot of trial and error to work out why this didn't work as expected. >>> >>> data = np.genfromtxt(fname,usecols=(2,3,4),names='x,y,z') >>> >>> this command works and does not return any warnings or errors, but returns an numpy array with no field names. If you use: >>> >>> data = np.genfromtxt(fname,usecols=(2,3,4),dtype=None,names='x,y,z') >>> >>> then the command does what I expect it to and returns a structured numpy array with field names. So essentially, the 'names' argument doesn't not work unless you also specify the 'dtype' argument. >>> > What did you actually expect? > It would be very informative if you could provide a simple example of > this for testing. > > There are many combinations of arguments so not all have been tested and > it is not always clear what the expected behavior should be. > >>> I think, it would be less confusing to new users to either have this explicitly mentioned in the documentation string for the genfromtxt 'names' argument or to have the function default to 'dtype=None' ?if the 'names' argument is specified without specifying the 'dtype' argument. >>> >>> - dharhas >>> >> I came across this behavior recently and agree with you. ?There is a >> patch in the works for this. >> >> See this thread: http://thread.gmane.org/gmane.comp.python.numeric.general/33479 >> >> And this ticket: http://projects.scipy.org/numpy/ticket/1252 >> >> Cheers, >> >> Skipper >> > > ?From the numpy help, there is this example: > data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), > ('mystring','S5')], delimiter=",") > These examples got added recently, so it may not be in your version of numpy if you haven't updated. You can see them here: http://docs.scipy.org/numpy/docs/numpy.lib.io.genfromtxt/ > It does not help that the dtype of structured arrays also includes the > actual name. So I do not think we can use dtype argument without using > the combination of dtype and name. Perhaps if dtype is split into names > and formats so that dtype=('name', 'format'). > In the first example above, since float is the default for dtype it's really dtype=float, and names=[...]. Names doesn't get used and it returns a plain ndarray. All that it would take is zipping float with each of the names so that it's a valid dtype. Right now, you could do dtype="f, f, f" or whatever and names = ['var1','var2',var3']. In the second example dtype = None determines the actual format of the data from the data itself and constructs the dtype. > In some sense you are suggesting that we should have something like: > > Ignore the use of None and True for dtype and names arguments: I don't think I (at least) am suggesting to ignore anything from the user. > i) If only dtype is only specified then use the specified dtype and add > default names such as col1, col2,... if necessary > This is what happens right now. But f0, f1, ... instead of col. 
> ii) If only names is specified, then construct the dtype as ('name',
> 'default format')

Or whatever is passed to dtype. See above.

> iii) If only formats is specified, then construct the dtype as ('default
> name', 'format')

What is formats? This is the same case as i? Are you suggesting adding a formats keyword? I suggested `type` to distinguish between a real dtype and this non-standard behavior that's being proposed now, but Pierre doesn't seem to think it's necessary, and I guess I agree as long as new users don't get too confused by this and it's documented as non-standard.

> iv) If both names and formats are specified, then construct the
> dtype as ('name', 'format')
>
> v) If none of dtype, names and formats is specified, then construct the
> dtype as ('default name', 'default format')
>
> vi) If dtype and names or formats are specified, then use dtype if it is
> of the form ('name', 'format') or use one of the previous cases.
>
> When dtype is None this implies the format is None, so the format is obtained
> from the data. If names is not True then the names are either from the
> argument or default values.
>
> If the names argument is True then the names should be read from the data
> and one of the previous cases applies.

I think I agree with this, except I don't think the `format` keyword is totally necessary. Basically, I want to leave the behavior as is, but if names is True or a sequence, then they're never ignored and the dtype is constructed for the user as "expected".

Skipper

From stefan at sun.ac.za Wed Oct 7 18:26:58 2009
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 8 Oct 2009 00:26:58 +0200
Subject: [SciPy-User] ANN: Image Processing SciKit
In-Reply-To: <20091007144239.GB26088@localhost.columbia.edu>
References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> <20091007144239.GB26088@localhost.columbia.edu>
Message-ID: <9457e7c80910071526y5208817byd35979c2a8c0c382@mail.gmail.com>

2009/10/7 Lev Givon :
> Would it be possible to create a 0.1 tag so that users know what
> commit "version 0.1" corresponds to?

Sure, done!

Stéfan

From cool-rr at cool-rr.com Thu Oct 8 03:34:18 2009
From: cool-rr at cool-rr.com (cool-RR)
Date: Thu, 8 Oct 2009 09:34:18 +0200
Subject: [SciPy-User] Simulations
In-Reply-To: <4ACB9916.10004@telenet.be>
References: <4ACA44C4.5090003@telenet.be> <4ACB9916.10004@telenet.be>
Message-ID:

Hey Cesar and David,
I thought about this and I think I better stick to the "Do one thing well" principle for now. Thanks for the insight though.

Ram.

On Tue, Oct 6, 2009 at 9:23 PM, Cesar Koers wrote:
> Hi Ram,
>
> A specific type of time domain solver for electromagnetics is e.g. TDFD
> = time domain finite difference. It is based on discretizing Maxwell
> equations in time & space.
>
> But the Maxwell equations can also be expressed in the frequency domain
> (thus for every frequency instead of every time instant). This leads to
> FDFD = frequency domain finite difference.
>
> Other kinds of models, like based on finite elements (FE), can also be
> developed in the frequency domain.
> > Perhaps you're now thinking that your 'step function' would still work > in the frequency domain (response at frequency f_{i+} as a function of > response at frequency f_i), but this doesn't work (to my knowledge) > because it requires that the system is modeled by differential equations > in the frequency domain (haven't encountered this before) > > > Ah and some other 'bureaucracy' features: > * tracking time spent / calculating time remaining till end > * refining/coarsening time step to improve accuracy/reduce simulation > time respectively > > best regards > > C > > > cool-RR wrote: > > Hey Cesar, > > > > Your comments are interesting. > > > > Can you explain to me a bit about frequency domain simulations? Can you > > give an example of a simulation simulating a real world process? > > > > I agree that GarlicSim must handle the bureaucracy well, as its job is > > to let the user write a simulation with as little bureaucracy as > possible. > > > > P.S. I registered garlicsim.org and it is now the > > main domain. > > > > Ram. > > > > On Mon, Oct 5, 2009 at 9:11 PM, Cesar Koers > > wrote: > > > > Hi Ram, > > > > > > I quickly read through your intro doc, I think you've explained your > > idea quite well. > > > > One remarks though: I think your framework would fit well to > time-domain > > (transient) models. But at this moment I don't see how you could cast > a > > frequency domain simulation (commonly used in EM solvers) in it. I'd > be > > careful with the idea that 'all simulations' fit into this. > > > > What I think is key to success of this kind of framework is how well > it > > handles the 'bureaucracy' of performing simulations (and speed, but > > you've already mentioned that the actual number crunching is up to > the > > user of the GarlicSim). With this, I mean the boring stuff, like > e.g.: > > > > * keeping track of which parameters vary between simulations > > * extracting data from a set of simulations as a function of one of > > these parameters > > * storing (and backing up) simulation results without taking up too > much > > space and needing to invent unique and descriptive file names > > * being able to redo a simulation (storing simulation parameters with > > results) > > * making simulation reports > > * comparing results with real-world data > > * for long simulations, being able to continue simulation after a > crash > > > > Just my 2 cents > > > > Best regards > > > > C > > > > > > cool-RR wrote: > > > Hello, > > > > > > This is not directly related to SciPy; I'm posting it here because > I > > > figure that there may be people here who know the scientific > > computing > > > world enough to help me with my question. > > > > > > I've been working on an open-source scientific computing project > for > > > about 6 months now, and I've come to the conclusion that it's > > about time > > > to find other users except myself for it, so I may get valuable > > feedback > > > about which direction I should be taking this project. > > > > > > The project is called GarlicSim (http://garlicsim.com > > > ). It's a Pythonic platform for working > with > > > simulations. You may read more about it on the webpage. In short, > > it's a > > > very general framework for creating, running and analyzing > > simulations. > > > It's not specific to any scientific field; Its role is to provide > a > > > general mold into which all simulations can be cast. 
If you want > > to know > > > more about it you can also read a (yet-incomplete) introduction > > > > > < > http://dl.getdropbox.com/u/1927707/Introduction%20to%20GarlicSim.doc> > > to > > > it. > > > > > > So what I want to know is, who would be good potential first > > users for > > > this, and how could I reach them? > > > I'm not even sure which scientific field I would like to target, > so > > > please suggest. > > > > > > > > > Thanks, > > > Ram Rachum > > > > > > > > > > > > ------------------------------------------------------------------------ > > > > > > _______________________________________________ > > > SciPy-User mailing list > > > SciPy-User at scipy.org > > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > > Gaetan Cesar Koers > > Kerkveldweg 82 > > 1851 Humbeek > > +32(0)486 20 11 16 > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > -- > > Sincerely, > > Ram Rachum > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > Gaetan Cesar Koers > Kerkveldweg 82 > 1851 Humbeek > +32(0)486 20 11 16 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Sincerely, Ram Rachum -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgomezdans at gmail.com Thu Oct 8 08:42:37 2009 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Thu, 8 Oct 2009 13:42:37 +0100 Subject: [SciPy-User] ndimage aware of masked arrays? Message-ID: <91d218430910080542y1a7b56abi4b322b45a6da8af0@mail.gmail.com> Hi! I would like to apply a gaussian filter to a 2d (masked) array, but I would only like to consider unmasked values (so that whenever it is calculating the output for a given pixel, only pixels in the neighbourhood with values unmasked are taken into account in the calculation). However, ndimage.filters.gaussian_filter seems to be unaware of masked arrays: >>> type( arr1 ) >>> gauss_arr1 = filters.gaussian_filter( arr1, (20,20) ) Is there some simple way of doing this? Thanks! J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nitinchandra1 at gmail.com Thu Oct 8 15:45:44 2009 From: nitinchandra1 at gmail.com (nitin chandra) Date: Fri, 9 Oct 2009 01:15:44 +0530 Subject: [SciPy-User] NumPy error In-Reply-To: <4ACAE0AF.2060001@ar.media.kyoto-u.ac.jp> References: <965122bf0910051226p6f7ad94cx7e4e28a3dd061a5d@mail.gmail.com> <4ACAE0AF.2060001@ar.media.kyoto-u.ac.jp> Message-ID: <965122bf0910081245g75b6da5dg3c50a6d87ea7e689@mail.gmail.com> Hello Everyone , David As per your suggested steps i did the installation procedure. But no luck ..... there is the same error .... Still there could be a probability that i have done some thing unwanted during my installation... please look at the file attached. As i cannot past the whole lot in the message .. due to limitations.... again .... please have a look at the attached file. ERROR [root at mi ~]# python Python 2.6.2 (r262:71600, Sep 28 2009, 21:33:37) [GCC 4.3.2 20081105 (Red Hat 4.3.2-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy Traceback (most recent call last): File "", line 1, in File "/opt/python262/lib/python2.6/site-packages/numpy/__init__.py", line 130, in import add_newdocs File "/opt/python262/lib/python2.6/site-packages/numpy/add_newdocs.py", line 9, in from lib import add_newdoc File "/opt/python262/lib/python2.6/site-packages/numpy/lib/__init__.py", line 13, in from polynomial import * File "/opt/python262/lib/python2.6/site-packages/numpy/lib/polynomial.py", line 18, in from numpy.linalg import eigvals, lstsq File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/__init__.py", line 47, in from linalg import * File "/opt/python262/lib/python2.6/site-packages/numpy/linalg/linalg.py", line 22, in from numpy.linalg import lapack_lite ImportError: liblapack.so: cannot open shared object file: No such file or directory >>> I did NOT INSTALL XBLAS. But will need FFTW (GIS??). Thanks Nitin -------------- next part -------------- start with # yum install tcl-devel and #yum install tk-devel Then python-2.6.2 #./configure --prefix=/opt/python25 --enable-shared --with-threads --enable-ipv6 \ --with-pth --enable-unicode --with-libm=/lib/libm-2.9.so --with-libc=/lib/libc-2.9.so ;;; --without-gcc ;; removes GNU License ;;; --with-cxx-main= ;; Boost shoudl (?) include BSD license ;;; ;;; Above Still needs to be clarified ... can this be done ? will it work? ;;;FOR 64bit ;;; --with-universal-arch=64-bit ;;; ;;; CPPFLAGS C/C++/Objective C preprocessor flags, e.g. -I if ;;; you have headers in a nonstandard directory ;;; CPP C preprocessor ;;; If we can make a SPEC file after giving all the above variables during ;;; ./configure, then we can do an RPM rebuild and have other programmes also ;;; installed for this version. ============================ UNCOMMNET THESE LINES IN "Modules/Setup" AFTER RUNNING THE ABOVE LINE. [PYTHON UNTAR DIR]# #joe / vi Modules/Setup # *** Always uncomment this (leave the leading underscore in!): _tkinter _tkinter.c tkappinit.c -DWITH_APPINIT \ # *** Uncomment and edit to reflect where your Tcl/Tk libraries are: -L/usr/lib \ # *** Uncomment and edit to reflect where your Tcl/Tk headers are: -I/usr/include \ # *** Uncomment and edit to reflect where your X11 header files are: -I/usr/X11R6/include \ # *** Or uncomment this for Solaris: # -I/usr/openwin/include \ # *** Uncomment and edit for Tix extension only: -DWITH_TIX -ltix \ # *** Uncomment and edit for BLT extension only: # -DWITH_BLT -I/usr/local/blt/blt8.0-unoff/include -lBLT8.0 \ # *** Uncomment and edit for PIL (TkImaging) extension only: # (See http://www.pythonware.com/products/pil/ for more info) -DWITH_PIL -I../Imaging-1.1.6/libImaging tkImaging.c \ ========================================= # Modules that should always be present (non UNIX dependent): array arraymodule.c # array objects cmath cmathmodule.c # -lm # complex math library functions math mathmodule.c # -lm # math library functions, e.g. 
sin()
#_struct _struct.c          # binary structure packing/unpacking
time timemodule.c # -lm     # time operations and variables
operator operator.c         # operator.add() and similar goodies
#_weakref _weakref.c        # basic weak reference support
#_testcapi _testcapimodule.c    # Python C API test module
_random _randommodule.c     # Random number generator
#collections collectionsmodule.c # Container types
itertools itertoolsmodule.c # Functions creating iterators for efficient looping
strop stropmodule.c         # String manipulations
unicodedata unicodedata.c   # static Unicode character database

=====================================

# Andrew Kuchling's zlib module.
# This requires zlib 1.1.3 (or later).
# See http://www.gzip.org/zlib/
zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz

==========================================

# Socket module helper for socket(2)
_socket socketmodule.c

# Socket module helper for SSL support; you must comment out the other
# socket line above, and possibly edit the SSL variable:
SSL=/usr/local/ssl
_ssl _ssl.c \
    -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
    -L$(SSL)/lib -lssl -lcrypto

=====================================

THIS IS NOT TO BE REPEATED EVERY TIME

* Adding Tkinter support

1. Compile Python's _tkinter.c with the WITH_APPINIT and WITH_PIL flags set, and link it with tkImaging.c and tkappinit.c. To do this, copy the former to the Modules directory, and edit the _tkinter line in Setup (or Setup.in) according to the instructions in that file. You must also change the _tkinter line in Setup (or Setup.in) to something like:

_tkinter _tkinter.c tkImaging.c tkappinit.c -DWITH_APPINIT -I/usr/local/include -L/usr/local/lib -ltk8.0 -ltcl8.0 -lX11

===========================================

THIS IS NOT TO BE REPEATED EVERY TIME

Copy (don't Move) all Dir/files Imaging-1.1.6/libImaging/*.* to ../Python2.6.2/Modules AND then copy ONLY files from Imaging-1.1.6/libImaging/*.* to /home/nitin/newpy/Python2.6/Modules/*.*

======================================

Step 6

Build and install Python 2.6.2. Now, there are some important things to discuss here. First and foremost we have given the option --prefix=/opt/python2.6. This option installs the python binaries and the python library in /opt/python262 (it will make the dir for us) rather than in /usr/local/, which would, as we stated above, replace the standard python interpreter and inherently be bad juju. The /opt directory in redhat based distributions provides a home for larger, mostly custom built, binaries and applications. Also, we made sure that the interpreter is going to make use of multiple threads by adding the --with-threads option. I believe that by default, with-threads is true, but better to be safe than sorry. Finally, the --enable-shared option just allows python to be embedded into other apps:

cd
tar xfz Python-2.6.2.tgz
cd Python-2.6.2
./configure --prefix=/opt/python2.6 --with-threads --enable-shared
make
make install

Step 7

We now need to make sure that all the users of the system access the new interpreter when python is typed into standard in. To do this, we will need to add a couple of aliases and an addition to the $PATH in each user's .bash_profile. This file is kept in the home directory of each user (i.e. /home/usera/.bash_profile):

OR

joe ~/.bashrc
export PYTHONPATH=/opt/python262/lib/python2.6/site-packages:$PYTHONPATH
SAVE and EXIT
source ~/.bashrc

contd....
su - root
cd
nano .bash_profile (vi OR joe instead of nano)
# add the following lines to the bottom of the file
alias python='/opt/python2.6/bin/python'
alias python2.6='/opt/python2.6/bin/python'
PATH=$PATH:/opt/python2.6/bin
# 'ctrl + o' to save the file and 'ctrl+x' to close the file
# now do the same for every other user, like this:
nano /home/usera/.bash_profile (vi OR joe instead of nano)
# add the following lines to the bottom of the file
alias python='/opt/python2.6/bin/python'
alias python2.6='/opt/python2.6/bin/python'
PATH=$PATH:/opt/python2.6/bin
# 'ctrl + o' to save the file and 'ctrl+x' to close the file

Step 8

Now we need to update BASH so that it knows about the new shared libraries that we have put on the system. Let's create a symlink to them and then reload the cache of the shared libraries:

su - root
cd
cat >> /etc/ld.so.conf.d/opt-python2.6.conf
/opt/python2.6/lib   # hit 'enter' and then 'ctrl+d'
ldconfig

Step 9

Now let's roll up some setup tools. This will also give us our conduit to the cheese shop (aka pypi), which I am a complete fan of, despite the naysayers. Also, we will add some more symlinks:

cd
wget http://pypi.python.org/packages/2.6/s/setuptools/setuptools-0.6c9-py2.6.egg#md5=ca37b1ff16fa2ede6e19383e7b59245a
sh setuptools-0.6c9-py2.6.egg
cd /opt/python2.6/lib/python2.6/config
ln -s ../../libpython2.6.so .

Step 10

Ignore Bridgett Cherry.

Step 11

You are done senior! Logout and log back in, to get the new bash profile stuff and continue on your way!

===========================================================================================================

INSTALLING FFTW

;;; this is for double precision
# ./configure --prefix=/opt/fftw332 --enable-shared --enable-threads --enable-sse2 --enable-portable-binary
# make
# make install

RUN THE ./configure A 2nd TIME
;;; This is for single precision
# ./configure --prefix=/opt/fftw332 --enable-shared --enable-threads --enable-sse --enable-portable-binary \
  --enable-float
# make > make.log
# make install > make.install.log

============================================================================================================

XBLAS.tar.gz INSTALLATION

# tar zxvf xblas.tar.gz
# cd xblas-1.0.248
# autoconf
# CC=gcc FC=gfortran ./configure --prefix=/opt/xblas
# m4 Makefile.m4 > Makefile
# make makefiles > makefiles.log
# make > make.log

To UNINSTALL
# make clean

============================================================================================================

LAPACK-LITE-3.1.1 INSTALLATION

# tar zxvf lapack-lite-3.1.1.tgz
# cd lapack-lite-3.1.1
# cp INSTALL/make.inc.gfortran make.inc

IMPORTANT
INSTALL ATLAS IN A BOGUS/TEMP DIR, WHICH YOU WILL DELETE AFTER DOING THE FOLLOWING:

"BOGUS INSTALL"
# tar zxvf atlas3.8.3.tgz
# mv ATLAS ATLAS-3.8.3
# mkdir ATLAS_tmp
# cd ATLAS_tmp
# /home/nitin/newpy/ATLAS-3.8.3/configure -Si cputhrchk 0 -b 32 -D c -DPentiumP4=1790 \
  -dylib -Fa alg

AFTER RUNNING THE ./configure (above), THIS WILL MAKE A make.inc FILE

EDIT LAPACK/make.inc :
COPY from ATLAS_tmp (./configure creates) make.inc
SEARCH FOR THE FOLLOWING LINE AND Copy after =
F77FLAGS = -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32

PASTE into LAPACK/make.inc OPT= :
FORTRAN = gfortran -fimplicit-none -g
OPTS = -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32
DRVOPTS = $(OPTS)
NOOPT = -fomit-frame-pointer -mfpmath=387 -m32
LOADER = gfortran -g
LOADOPTS= $(OPTS)

{ following NOT in lapack-lite ;;;
further down Un-comment the lines and add the path:
;;; USEXBLAS = Yes
;;; XBLASLIB = /home/nitin/newpy/xblas-1.0.248/libxblas.a
;;; #XBLASLIB = -lxblas }

save and exit

{{{ following is default
# joe Makefile (nano/pico/vi)
edit
all: lapack_install lib lapack_testing blas_testing
save and exit }}}

# make blaslib > laplit311_blaslib.log
# make > laplit311_make.log
# cp lapack_LINUX.a liblapack.a

{{{ ;;; did not do this step
OR
# ld -o /opt/atlas/lib/liblapack.so -shared --whole-archive \
  --export-dynamic /home/nitin/newpy/lapack-3.2.1/liblapack.a }}}

# rm -Rf ATLAS_tmp/

The following is in Makefile
all: lapack_install lib lapack_testing blas_testing
;;; for the time being ;;; removed 'testing' & ;;; 'timing'

TO UNINSTALL [ LAPACK ]
# rm -vfr lapack_LINUX.a blas_LINUX.a tmglib_LINUX.a lapacklib.a
# make clean

==========================================================================================

ATLAS INSTALLATION INSTRUCTIONS

# tar jxvf atlas-3.8.3.tgz
# mv ATLAS ATLAS-3.8.3
;;; Rename the directory, convenient

;;; Turn off CPU throttling when installing ATLAS, Fedora
# /usr/bin/cpufreq-selector -g performance

;;; On my Core2Duo, cpufreq-selector only changes the parameters of the first CPU,
;;; regardless of which cpu you specify. I suspect this is a bug, because on earlier
;;; systems, the remaining CPUs were controlled via a logical link to
;;; /sys/devices/system/cpu/cpu0/. In this case, the only way I found to force the
;;; second processor to also run at its peak frequency was to issue the following as
;;; root after setting CPU0 to performance:
cp /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
   /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor

# cd ..   (one up dir)
;;; Out of ATLAS-3.8.3 dir /home/nitin/newpy
# mkdir linux_P4ESSE2
;;; Make a new dir Or as.... Linux_C2D64SSE3 (Core2Duo)

# /home/nitin/newpy/ATLAS-3.8.3/configure --with-netlib-lapack=/home/nitin/newpy/lapack-lite-3.1.1/lapack_LINUX.a \
  -dylib -b 32 -D c -DPentiumP4=1790 --prefix=/opt/atlas -Ss flapack /home/nitin/newpy/lapack-lite-3.1.1/SRC \
  -Fa alg -fPIC -Si cputhrchk 0 > config.log
;;; takes a good amount of an hour ... frankly depending on your machine config.

# make build > at383_build.log
# make check > at383_check.log
# make > at383_make.log
;;; # make time > time.log
# make install > at383_install.log

[linux_P4ESSE2]# cd lib
# cp /home/nitin/newpy/lapack-lite-3.1.1/liblapack.a .   (Overwrite? y)
;;; copy to current dir
;;; this liblapack.a = lapack_LINUX.a = 13MB approx (at least more than 6MB)
# make shared > at311_shared.log
# cp -f *.so /opt/atlas/lib/.
# cd ..
;;; [linux_P4ESSE2]# cd bin ;;;; # make xdlutst_dyn > at311_xdlutst.log ( export ATLAS=/usr/local/lib/atlas ) UNINSTALL [Linux_P4ESSE2]# make clean ===================================================================================== INSTALLING nose #tar zxvf nose-0.11.1.tar.tar # cd nose-0.11.1 #/opt/python262/bin/python setup.py install --prefix=/opt/python262 2>&1 | tee nose.log ===================================================================================== INSTALLING numpy # tar zxvf numpy-1.3.0rc2.tar.gz # cd numpy-1.3.0rc2 # cp site.cfg.example site.cfg # joe site.cfg [DEFAULT] library_dirs = /usr/local/lib:/opt/atlas/lib:/opt/fftw332/lib:/opt/python262/lib include_dirs = /usr/local/include:/opt/atlas/include:/opt/fftw332/include:/opt/python262/include [blas_opt] libraries = f77blas, cblas, atlas [lapack_opt] libraries = lapack, f77blas, cblas, atlas, g2c [fftw] libraries = fftw3, fftw3f [fftw_opt] libraries = fftw3_threads, fftw3f_threads SAVE and EXIT # /opt/python262/bin/python setup.py -v config_fc build_ext --fcompiler=gnu95 build | tee num130_build.log # /opt/python262/bin/python setup.py install --prefix=/opt/python262 2>&1 | tee num130_install.log # source ~/.bashrc TO UN-INSTALL numpy Remove dir 'build' Remove /opt/python262/lib/python2.6/site-packages/numpy-*.egg and Remove -rvf /opt/python262/lib/python2.6/site-packages/numpy/ ;;; numpy/ direcotry ================================================================================ INSTALLING SciPy /home/nitin/newpy/scipy-0.7.1.tar.gz # tar zxvf scipy-0.7.1.tar.gz # cd scipy-0.7.1 # XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX From dominique.orban at gmail.com Thu Oct 8 16:27:35 2009 From: dominique.orban at gmail.com (dpo) Date: Thu, 8 Oct 2009 13:27:35 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Prevent f2py in add_extension() Message-ID: <25809074.post@talk.nabble.com> Hello, My setup.py scripts use numpy distutils. I notice that whenever the list of source files specified in config.add_extension() contains Fortran files, f2py kicks in and tries to build a wrapper around the Fortran files. How can I prevent this behavior? I have a pre-written extension module in C which relies on a Fortran file, i.e. : src_files = ['file1.c', 'file2.f'] # file1.c is already a wrapper; I don't want file2.f wrapped by f2py config.add_extension( name = 'some_extension', sources = src_files, ) I've tried adding f2py_options=['--no-wrap-functions'] to the argument list, but that doesn't seem to do the trick. The only way I've found so far is to create libraries with all the Fortran files, but that's a bit artificial. Any ideas? Thanks in advance. -- View this message in context: http://www.nabble.com/Prevent-f2py-in-add_extension%28%29-tp25809074p25809074.html Sent from the Scipy-User mailing list archive at Nabble.com. From d.l.goldsmith at gmail.com Thu Oct 8 17:15:15 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 8 Oct 2009 14:15:15 -0700 Subject: [SciPy-User] Simulations In-Reply-To: References: <4ACA44C4.5090003@telenet.be> <4ACB9916.10004@telenet.be> Message-ID: <45d1ab480910081415k43c4b5a3s215b0657750acc69@mail.gmail.com> Probably wise, but then Cesar's comment about being careful about what you advertise applies. DG On Thu, Oct 8, 2009 at 12:34 AM, cool-RR wrote: > Hey Cesar and David, > I thought about this and I think I better stick to the "Do one thing well" > principle for now. Thanks for the insight though. > > Ram. 
> > > On Tue, Oct 6, 2009 at 9:23 PM, Cesar Koers wrote: > >> Hi Ram, >> >> >> A specific type of time domain solver for electromagnetics is e.g. TDFD >> = time domain finite difference. It is based on discretizing Maxwell >> equations in time & space >> >> But the Maxwell equations can also be expressed in the frequency domain >> (thus for every frequency instead of every time instant). This leads to >> FDFD = frequency domain finite difference. >> >> Other kinds of models, like based on finite elements (FE) can also be >> developed in the frequency domain. >> >> Perhaps you're now thinking that your 'step function' would still work >> in the frequency domain (response at frequency f_{i+} as a function of >> response at frequency f_i), but this doesn't work (to my knowledge) >> because it requires that the system is modeled by differential equations >> in the frequency domain (haven't encountered this before) >> >> >> Ah and some other 'bureaucracy' features: >> * tracking time spent / calculating time remaining till end >> * refining/coarsening time step to improve accuracy/reduce simulation >> time respectively >> >> best regards >> >> C >> >> >> cool-RR wrote: >> > Hey Cesar, >> > >> > Your comments are interesting. >> > >> > Can you explain to me a bit about frequency domain simulations? Can you >> > give an example of a simulation simulating a real world process? >> > >> > I agree that GarlicSim must handle the bureaucracy well, as its job is >> > to let the user write a simulation with as little bureaucracy as >> possible. >> > >> > P.S. I registered garlicsim.org and it is now >> the >> > main domain. >> > >> > Ram. >> > >> > On Mon, Oct 5, 2009 at 9:11 PM, Cesar Koers > > > wrote: >> > >> > Hi Ram, >> > >> > >> > I quickly read through your intro doc, I think you've explained your >> > idea quite well. >> > >> > One remarks though: I think your framework would fit well to >> time-domain >> > (transient) models. But at this moment I don't see how you could >> cast a >> > frequency domain simulation (commonly used in EM solvers) in it. I'd >> be >> > careful with the idea that 'all simulations' fit into this. >> > >> > What I think is key to success of this kind of framework is how well >> it >> > handles the 'bureaucracy' of performing simulations (and speed, but >> > you've already mentioned that the actual number crunching is up to >> the >> > user of the GarlicSim). With this, I mean the boring stuff, like >> e.g.: >> > >> > * keeping track of which parameters vary between simulations >> > * extracting data from a set of simulations as a function of one of >> > these parameters >> > * storing (and backing up) simulation results without taking up too >> much >> > space and needing to invent unique and descriptive file names >> > * being able to redo a simulation (storing simulation parameters >> with >> > results) >> > * making simulation reports >> > * comparing results with real-world data >> > * for long simulations, being able to continue simulation after a >> crash >> > >> > Just my 2 cents >> > >> > Best regards >> > >> > C >> > >> > >> > cool-RR wrote: >> > > Hello, >> > > >> > > This is not directly related to SciPy; I'm posting it here >> because I >> > > figure that there may be people here who know the scientific >> > computing >> > > world enough to help me with my question. 
>> > > >> > > I've been working on an open-source scientific computing project >> for >> > > about 6 months now, and I've come to the conclusion that it's >> > about time >> > > to find other users except myself for it, so I may get valuable >> > feedback >> > > about which direction I should be taking this project. >> > > >> > > The project is called GarlicSim (http://garlicsim.com >> > > ). It's a Pythonic platform for working >> with >> > > simulations. You may read more about it on the webpage. In short, >> > it's a >> > > very general framework for creating, running and analyzing >> > simulations. >> > > It's not specific to any scientific field; Its role is to provide >> a >> > > general mold into which all simulations can be cast. If you want >> > to know >> > > more about it you can also read a (yet-incomplete) introduction >> > > >> > < >> http://dl.getdropbox.com/u/1927707/Introduction%20to%20GarlicSim.doc> >> > to >> > > it. >> > > >> > > So what I want to know is, who would be good potential first >> > users for >> > > this, and how could I reach them? >> > > I'm not even sure which scientific field I would like to target, >> so >> > > please suggest. >> > > >> > > >> > > Thanks, >> > > Ram Rachum >> > > >> > > >> > > >> > >> ------------------------------------------------------------------------ >> > > >> > > _______________________________________________ >> > > SciPy-User mailing list >> > > SciPy-User at scipy.org >> > > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > -- >> > Gaetan Cesar Koers >> > Kerkveldweg 82 >> > 1851 Humbeek >> > +32(0)486 20 11 16 >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > >> > >> > >> > -- >> > Sincerely, >> > Ram Rachum >> > >> > >> > ------------------------------------------------------------------------ >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> >> -- >> Gaetan Cesar Koers >> Kerkveldweg 82 >> 1851 Humbeek >> +32(0)486 20 11 16 >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Sincerely, > Ram Rachum > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lev at columbia.edu Thu Oct 8 17:18:03 2009 From: lev at columbia.edu (Lev Givon) Date: Thu, 8 Oct 2009 17:18:03 -0400 Subject: [SciPy-User] ANN: Image Processing SciKit In-Reply-To: <9457e7c80910071526y5208817byd35979c2a8c0c382@mail.gmail.com> References: <9457e7c80909241043h6b94c18q5e4fb8ccc5cdc657@mail.gmail.com> <20091007144239.GB26088@localhost.columbia.edu> <9457e7c80910071526y5208817byd35979c2a8c0c382@mail.gmail.com> Message-ID: <20091008211803.GA427@localhost.columbia.edu> Received from St?fan van der Walt on Wed, Oct 07, 2009 at 06:26:58PM EDT: > 2009/10/7 Lev Givon : > > Would it be possible to create a 0.1 tag so that users know what > > commit "version 0.1" corresponds to? > > Sure, done! > > St?fan Thanks! 
By the way, I would suggest adding a MANIFEST.in file to the root directory of the project that would include the various *.txt and documentation files in a distribution tarball created with python setup.py sdist. L.G. From cool-rr at cool-rr.com Thu Oct 8 17:26:53 2009 From: cool-rr at cool-rr.com (cool-RR) Date: Thu, 8 Oct 2009 23:26:53 +0200 Subject: [SciPy-User] Simulations In-Reply-To: <45d1ab480910081415k43c4b5a3s215b0657750acc69@mail.gmail.com> References: <4ACA44C4.5090003@telenet.be> <4ACB9916.10004@telenet.be> <45d1ab480910081415k43c4b5a3s215b0657750acc69@mail.gmail.com> Message-ID: How would you phrase it then, instead of "all simulations"? I wouldn't want to deceive people. On Thu, Oct 8, 2009 at 11:15 PM, David Goldsmith wrote: > Probably wise, but then Cesar's comment about being careful about what you > advertise applies. > > DG > > > On Thu, Oct 8, 2009 at 12:34 AM, cool-RR wrote: > >> Hey Cesar and David, >> I thought about this and I think I better stick to the "Do one thing well" >> principle for now. Thanks for the insight though. >> >> Ram. >> >> >> On Tue, Oct 6, 2009 at 9:23 PM, Cesar Koers wrote: >> >>> Hi Ram, >>> >>> >>> A specific type of time domain solver for electromagnetics is e.g. TDFD >>> = time domain finite difference. It is based on discretizing Maxwell >>> equations in time & space >>> >>> But the Maxwell equations can also be expressed in the frequency domain >>> (thus for every frequency instead of every time instant). This leads to >>> FDFD = frequency domain finite difference. >>> >>> Other kinds of models, like based on finite elements (FE) can also be >>> developed in the frequency domain. >>> >>> Perhaps you're now thinking that your 'step function' would still work >>> in the frequency domain (response at frequency f_{i+} as a function of >>> response at frequency f_i), but this doesn't work (to my knowledge) >>> because it requires that the system is modeled by differential equations >>> in the frequency domain (haven't encountered this before) >>> >>> >>> Ah and some other 'bureaucracy' features: >>> * tracking time spent / calculating time remaining till end >>> * refining/coarsening time step to improve accuracy/reduce simulation >>> time respectively >>> >>> best regards >>> >>> C >>> >>> >>> cool-RR wrote: >>> > Hey Cesar, >>> > >>> > Your comments are interesting. >>> > >>> > Can you explain to me a bit about frequency domain simulations? Can you >>> > give an example of a simulation simulating a real world process? >>> > >>> > I agree that GarlicSim must handle the bureaucracy well, as its job is >>> > to let the user write a simulation with as little bureaucracy as >>> possible. >>> > >>> > P.S. I registered garlicsim.org and it is now >>> the >>> > main domain. >>> > >>> > Ram. >>> > >>> > On Mon, Oct 5, 2009 at 9:11 PM, Cesar Koers >> > > wrote: >>> > >>> > Hi Ram, >>> > >>> > >>> > I quickly read through your intro doc, I think you've explained >>> your >>> > idea quite well. >>> > >>> > One remarks though: I think your framework would fit well to >>> time-domain >>> > (transient) models. But at this moment I don't see how you could >>> cast a >>> > frequency domain simulation (commonly used in EM solvers) in it. >>> I'd be >>> > careful with the idea that 'all simulations' fit into this. 
>>> > >>> > What I think is key to success of this kind of framework is how >>> well it >>> > handles the 'bureaucracy' of performing simulations (and speed, but >>> > you've already mentioned that the actual number crunching is up to >>> the >>> > user of the GarlicSim). With this, I mean the boring stuff, like >>> e.g.: >>> > >>> > * keeping track of which parameters vary between simulations >>> > * extracting data from a set of simulations as a function of one of >>> > these parameters >>> > * storing (and backing up) simulation results without taking up too >>> much >>> > space and needing to invent unique and descriptive file names >>> > * being able to redo a simulation (storing simulation parameters >>> with >>> > results) >>> > * making simulation reports >>> > * comparing results with real-world data >>> > * for long simulations, being able to continue simulation after a >>> crash >>> > >>> > Just my 2 cents >>> > >>> > Best regards >>> > >>> > C >>> > >>> > >>> > cool-RR wrote: >>> > > Hello, >>> > > >>> > > This is not directly related to SciPy; I'm posting it here >>> because I >>> > > figure that there may be people here who know the scientific >>> > computing >>> > > world enough to help me with my question. >>> > > >>> > > I've been working on an open-source scientific computing project >>> for >>> > > about 6 months now, and I've come to the conclusion that it's >>> > about time >>> > > to find other users except myself for it, so I may get valuable >>> > feedback >>> > > about which direction I should be taking this project. >>> > > >>> > > The project is called GarlicSim (http://garlicsim.com >>> > > ). It's a Pythonic platform for working >>> with >>> > > simulations. You may read more about it on the webpage. In >>> short, >>> > it's a >>> > > very general framework for creating, running and analyzing >>> > simulations. >>> > > It's not specific to any scientific field; Its role is to >>> provide a >>> > > general mold into which all simulations can be cast. If you want >>> > to know >>> > > more about it you can also read a (yet-incomplete) introduction >>> > > >>> > < >>> http://dl.getdropbox.com/u/1927707/Introduction%20to%20GarlicSim.doc> >>> > to >>> > > it. >>> > > >>> > > So what I want to know is, who would be good potential first >>> > users for >>> > > this, and how could I reach them? >>> > > I'm not even sure which scientific field I would like to target, >>> so >>> > > please suggest. 
>>> > > >>> > > >>> > > Thanks, >>> > > Ram Rachum >>> > > >>> > > >>> > > >>> > >>> ------------------------------------------------------------------------ >>> > > >>> > > _______________________________________________ >>> > > SciPy-User mailing list >>> > > SciPy-User at scipy.org >>> > > http://mail.scipy.org/mailman/listinfo/scipy-user >>> > >>> > -- >>> > Gaetan Cesar Koers >>> > Kerkveldweg 82 >>> > 1851 Humbeek >>> > +32(0)486 20 11 16 >>> > _______________________________________________ >>> > SciPy-User mailing list >>> > SciPy-User at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>> > >>> > >>> > >>> > >>> > -- >>> > Sincerely, >>> > Ram Rachum >>> > >>> > >>> > >>> ------------------------------------------------------------------------ >>> > >>> > _______________________________________________ >>> > SciPy-User mailing list >>> > SciPy-User at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> -- >>> Gaetan Cesar Koers >>> Kerkveldweg 82 >>> 1851 Humbeek >>> +32(0)486 20 11 16 >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> >> >> >> -- >> Sincerely, >> Ram Rachum >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Sincerely, Ram Rachum -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Thu Oct 8 18:51:13 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 8 Oct 2009 15:51:13 -0700 Subject: [SciPy-User] Simulations In-Reply-To: References: <4ACA44C4.5090003@telenet.be> <4ACB9916.10004@telenet.be> <45d1ab480910081415k43c4b5a3s215b0657750acc69@mail.gmail.com> Message-ID: <45d1ab480910081551vc8611fcjd283b36322a41894@mail.gmail.com> A little problematic as no wording is guaranteed to be transparent to all users (different disciplines have different jargon for essentially the same thing) but possibilities include "time-domain-like" (probably the preference of engineers and many other breeds of scientist) or "discretely-incremented" (perhaps the most general) or "discrete dynamical system" (my preference as a mathematician). DG On Thu, Oct 8, 2009 at 2:26 PM, cool-RR wrote: > How would you phrase it then, instead of "all simulations"? I wouldn't want > to deceive people. > > On Thu, Oct 8, 2009 at 11:15 PM, David Goldsmith wrote: > >> Probably wise, but then Cesar's comment about being careful about what you >> advertise applies. >> >> DG >> >> >> On Thu, Oct 8, 2009 at 12:34 AM, cool-RR wrote: >> >>> Hey Cesar and David, >>> I thought about this and I think I better stick to the "Do one thing >>> well" principle for now. Thanks for the insight though. >>> >>> Ram. >>> >>> >>> On Tue, Oct 6, 2009 at 9:23 PM, Cesar Koers wrote: >>> >>>> Hi Ram, >>>> >>>> >>>> A specific type of time domain solver for electromagnetics is e.g. TDFD >>>> = time domain finite difference. It is based on discretizing Maxwell >>>> equations in time & space >>>> >>>> But the Maxwell equations can also be expressed in the frequency domain >>>> (thus for every frequency instead of every time instant). This leads to >>>> FDFD = frequency domain finite difference. 
>>>> >>>> Other kinds of models, like based on finite elements (FE) can also be >>>> developed in the frequency domain. >>>> >>>> Perhaps you're now thinking that your 'step function' would still work >>>> in the frequency domain (response at frequency f_{i+} as a function of >>>> response at frequency f_i), but this doesn't work (to my knowledge) >>>> because it requires that the system is modeled by differential equations >>>> in the frequency domain (haven't encountered this before) >>>> >>>> >>>> Ah and some other 'bureaucracy' features: >>>> * tracking time spent / calculating time remaining till end >>>> * refining/coarsening time step to improve accuracy/reduce simulation >>>> time respectively >>>> >>>> best regards >>>> >>>> C >>>> >>>> >>>> cool-RR wrote: >>>> > Hey Cesar, >>>> > >>>> > Your comments are interesting. >>>> > >>>> > Can you explain to me a bit about frequency domain simulations? Can >>>> you >>>> > give an example of a simulation simulating a real world process? >>>> > >>>> > I agree that GarlicSim must handle the bureaucracy well, as its job is >>>> > to let the user write a simulation with as little bureaucracy as >>>> possible. >>>> > >>>> > P.S. I registered garlicsim.org and it is now >>>> the >>>> > main domain. >>>> > >>>> > Ram. >>>> > >>>> > On Mon, Oct 5, 2009 at 9:11 PM, Cesar Koers >>> > > wrote: >>>> > >>>> > Hi Ram, >>>> > >>>> > >>>> > I quickly read through your intro doc, I think you've explained >>>> your >>>> > idea quite well. >>>> > >>>> > One remarks though: I think your framework would fit well to >>>> time-domain >>>> > (transient) models. But at this moment I don't see how you could >>>> cast a >>>> > frequency domain simulation (commonly used in EM solvers) in it. >>>> I'd be >>>> > careful with the idea that 'all simulations' fit into this. >>>> > >>>> > What I think is key to success of this kind of framework is how >>>> well it >>>> > handles the 'bureaucracy' of performing simulations (and speed, >>>> but >>>> > you've already mentioned that the actual number crunching is up to >>>> the >>>> > user of the GarlicSim). With this, I mean the boring stuff, like >>>> e.g.: >>>> > >>>> > * keeping track of which parameters vary between simulations >>>> > * extracting data from a set of simulations as a function of one >>>> of >>>> > these parameters >>>> > * storing (and backing up) simulation results without taking up >>>> too much >>>> > space and needing to invent unique and descriptive file names >>>> > * being able to redo a simulation (storing simulation parameters >>>> with >>>> > results) >>>> > * making simulation reports >>>> > * comparing results with real-world data >>>> > * for long simulations, being able to continue simulation after a >>>> crash >>>> > >>>> > Just my 2 cents >>>> > >>>> > Best regards >>>> > >>>> > C >>>> > >>>> > >>>> > cool-RR wrote: >>>> > > Hello, >>>> > > >>>> > > This is not directly related to SciPy; I'm posting it here >>>> because I >>>> > > figure that there may be people here who know the scientific >>>> > computing >>>> > > world enough to help me with my question. >>>> > > >>>> > > I've been working on an open-source scientific computing >>>> project for >>>> > > about 6 months now, and I've come to the conclusion that it's >>>> > about time >>>> > > to find other users except myself for it, so I may get valuable >>>> > feedback >>>> > > about which direction I should be taking this project. >>>> > > >>>> > > The project is called GarlicSim (http://garlicsim.com >>>> > > ). 
It's a Pythonic platform for working >>>> with >>>> > > simulations. You may read more about it on the webpage. In >>>> short, >>>> > it's a >>>> > > very general framework for creating, running and analyzing >>>> > simulations. >>>> > > It's not specific to any scientific field; Its role is to >>>> provide a >>>> > > general mold into which all simulations can be cast. If you >>>> want >>>> > to know >>>> > > more about it you can also read a (yet-incomplete) introduction >>>> > > >>>> > < >>>> http://dl.getdropbox.com/u/1927707/Introduction%20to%20GarlicSim.doc> >>>> > to >>>> > > it. >>>> > > >>>> > > So what I want to know is, who would be good potential first >>>> > users for >>>> > > this, and how could I reach them? >>>> > > I'm not even sure which scientific field I would like to >>>> target, so >>>> > > please suggest. >>>> > > >>>> > > >>>> > > Thanks, >>>> > > Ram Rachum >>>> > > >>>> > > >>>> > > >>>> > >>>> ------------------------------------------------------------------------ >>>> > > >>>> > > _______________________________________________ >>>> > > SciPy-User mailing list >>>> > > SciPy-User at scipy.org >>>> > > http://mail.scipy.org/mailman/listinfo/scipy-user >>>> > >>>> > -- >>>> > Gaetan Cesar Koers >>>> > Kerkveldweg 82 >>>> > 1851 Humbeek >>>> > +32(0)486 20 11 16 >>>> > _______________________________________________ >>>> > SciPy-User mailing list >>>> > SciPy-User at scipy.org >>>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>>> > >>>> > >>>> > >>>> > >>>> > -- >>>> > Sincerely, >>>> > Ram Rachum >>>> > >>>> > >>>> > >>>> ------------------------------------------------------------------------ >>>> > >>>> > _______________________________________________ >>>> > SciPy-User mailing list >>>> > SciPy-User at scipy.org >>>> > http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> -- >>>> Gaetan Cesar Koers >>>> Kerkveldweg 82 >>>> 1851 Humbeek >>>> +32(0)486 20 11 16 >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >>> >>> >>> -- >>> Sincerely, >>> Ram Rachum >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > -- > Sincerely, > Ram Rachum > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pearu.peterson at gmail.com Fri Oct 9 02:41:37 2009 From: pearu.peterson at gmail.com (Pearu Peterson) Date: Fri, 09 Oct 2009 09:41:37 +0300 Subject: [SciPy-User] [SciPy-user] Prevent f2py in add_extension() In-Reply-To: <25809074.post@talk.nabble.com> References: <25809074.post@talk.nabble.com> Message-ID: <4ACEDB21.4060606@cens.ioc.ee> dpo wrote: > Hello, > > My setup.py scripts use numpy distutils. I notice that whenever the list of > source files specified in config.add_extension() contains Fortran files, > f2py kicks in and tries to build a wrapper around the Fortran files. How can > I prevent this behavior? I have a pre-written extension module in C which > relies on a Fortran file, i.e. 
: > > src_files = ['file1.c', 'file2.f'] # file1.c is already a wrapper; I don't > want file2.f wrapped by f2py > > config.add_extension( > name = 'some_extension', > sources = src_files, > ) Add fortran files to a library and specify the library in the extension: config.add_library(name='some_library', sources=['file2.f']) config.add_extension(name='some_extension', sources=['file1.c'], libraries=['some_library']) HTH, Pearu From elcorto at gmx.net Fri Oct 9 03:46:03 2009 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 9 Oct 2009 09:46:03 +0200 Subject: [SciPy-User] scipy.optimize fmin error In-Reply-To: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com> References: <6ec71d090910061124k70bb61d0n29cc74cf6c35de0b@mail.gmail.com> Message-ID: <20091009074603.GA3948@ramrod.starsheriffs.de> On Oct 06 20:24 +0200, Oz Nahum wrote: [...] > t = array([1.0,300.0,600.0, 900., 1200., 1260., 1320., 1380, \ > 1440, 1500, 1560, 1620, 1680, 1740, 1800, 1860, > 1920, 1980, 2040, 2100, 2160, 2220, 2280, 2340,\ > 2400, 2460, 2520, 2580, 2640, 2700, 2760, 2820,\ > 2880, 2940, 3000, 3060, 3120, 3180, 3240, 3300,\ > 3360, 3420, 3480, 3540, 3600, 3660, 3720, 3780,\ > 3840, 3900, 4200, 4500, 4800, 5100, 5400, 5700,\ > 6000, 6300, 6600, 6900, 7200, 7500, 7800, 8100,\ > 8400, 8700, 9000, 9300, 9600, 9900, 10200, 10500,\ > 10800, 11100, 11400, 12000]) > > t = t.transpose() > > c = array([0.07, 0.1, 0.11, 0.13, 1.17, 2.15, 3.65, 5.64,\ > 8.12, 11, 14.3, 17.3, 20.6, 23.5, 26.5, 29.1,\ > 31.5, 33.5, 35.3, 36.8, 37.9, 38.8, 39.5, 39.8,\ > 40.1, 40.2, 40.1, 39.9, 39.5, 39, 38.5, 37.9, \ > 37.3, 36.5, 35.9, 35.1, 34.4, 33.5, 32.9, 32, \ > 31.2, 30.5, 29.9, 29, 28.2, 27.5, 26.8, 26.1, \ > 25.4, 24.7, 21.7, 19, 16.8, 14.8, 13.3, 12.1, \ > 11, 10.1, 9.4, 8.81, 8.15, 7.71, 7.3, 6.98, \ > 6.67, 6.36, 6.12, 5.92, 5.78, 5.58, 5.41, 5.15, \ > 4.77, 4.54, 4.37, 4.19])-0.07*1e-9*1350 > > c=c.transpose() [...] Some further small comments. t.transpose() (or t.T) is not useful for rank 1 arrays. The shape of `t` and `c` does not change. # rank 1 array, shape (N,) In [16]: t=array([1,2,3]) In [17]: t.shape Out[17]: (3,) # transpose, shape (N,) In [18]: t.T.shape Out[18]: (3,) # "row vector" in Matlab = 1xN matrix, shape (1,N) In [19]: trow=array([[1,2,3]]) In [20]: trow.shape Out[20]: (1, 3) # column vector, shape (N,1) In [21]: trow.T Out[21]: array([[1], [2], [3]]) In [22]: trow.T.shape Out[22]: (3, 1) See also http://scipy.org/NumPy_for_Matlab_Users. best, Steve From brennan.williams at visualreservoir.com Fri Oct 9 04:53:34 2009 From: brennan.williams at visualreservoir.com (Brennan Williams) Date: Fri, 09 Oct 2009 21:53:34 +1300 Subject: [SciPy-User] Response surface methodology In-Reply-To: References: Message-ID: <4ACEFA0E.6050402@visualreservoir.com> I don't now of anything generic but I am interested in this. I'm looking at using the linalg module to create a polynomial, then using Monte Carlo simulation to create a response surface and visualize this in either (or both) Chaco and Mayavi. giorgio.luciano at inwind.it wrote: > Does anyone has some python code (or eventually R code that can be imported in python) related to Response Surface Methodology ? 
I've tried to dig around but nothing seems available > > http://en.wikipedia.org/wiki/Response_surface_methodology > > Thanks in advance for any suggestions > > Giorgio > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From denis-bz-gg at t-online.de Fri Oct 9 09:46:13 2009 From: denis-bz-gg at t-online.de (denis) Date: Fri, 9 Oct 2009 06:46:13 -0700 (PDT) Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: <49d6b3500910071232k5ff25d19mdc100dc54aadb709@mail.gmail.com> References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> <1254943717.4757.19.camel@idol> <49d6b3500910071232k5ff25d19mdc100dc54aadb709@mail.gmail.com> Message-ID: <66fc52dd-4c61-4bcb-8902-1a566aee5b34@x37g2000yqj.googlegroups.com> Folks, On pydocweb: thanks, but Django ?! Despite the nice pydocweb/doc/installation.rst, what are the chances that a dummy (me, old guy raised on paper doc with no install) could install Django, get through pages of setup, and live to tell the tale ? I just prefer minimal packages to big ones -- Django, Enthought -- with an avalanche of requires. rest-to-html > my.html, refresh my.html in the browser is not realtime, but. On Markdown: although I (old guy) dislike most GUIs, I've gotton to like the Markdown editor in mechanicalkern / stackoverflow. Once you get used to realtime, you want it; and the half-dozen buttons for \ etc are good. Look at a stackoverflow discussion vs one in gmail -- form does affect function. It's still not clear to me how mechanicalkern -> mainline scipy doc will work, perhaps a new thread on that in scipy-dev ? cheers -- denis From dominique.orban at gmail.com Fri Oct 9 09:56:07 2009 From: dominique.orban at gmail.com (dpo) Date: Fri, 9 Oct 2009 06:56:07 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Prevent f2py in add_extension() In-Reply-To: <4ACEDB21.4060606@cens.ioc.ee> References: <25809074.post@talk.nabble.com> <4ACEDB21.4060606@cens.ioc.ee> Message-ID: <25821633.post@talk.nabble.com> Pearu Peterson-2 wrote: > > > > dpo wrote: >> Hello, >> >> My setup.py scripts use numpy distutils. I notice that whenever the list >> of >> source files specified in config.add_extension() contains Fortran files, >> f2py kicks in and tries to build a wrapper around the Fortran files. How >> can >> I prevent this behavior? I have a pre-written extension module in C which >> relies on a Fortran file, i.e. : >> >> src_files = ['file1.c', 'file2.f'] # file1.c is already a wrapper; I >> don't >> want file2.f wrapped by f2py >> >> config.add_extension( >> name = 'some_extension', >> sources = src_files, >> ) > > Add fortran files to a library and specify the library in the extension: > > config.add_library(name='some_library', sources=['file2.f']) > config.add_extension(name='some_extension', sources=['file1.c'], > libraries=['some_library']) > > HTH, > Pearu > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > OK thanks. That's what I've been doing so far. D. -- View this message in context: http://www.nabble.com/Prevent-f2py-in-add_extension%28%29-tp25809074p25821633.html Sent from the Scipy-User mailing list archive at Nabble.com. 
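For reference, a minimal, self-contained setup.py sketch of the add_library() approach from the f2py thread above (the package, library and file names are placeholders, not from the original posts):

# numpy.distutils sketch: build the Fortran source as a plain library
# so f2py never sees it; only the hand-written C wrapper is listed as
# an extension source.
from numpy.distutils.misc_util import Configuration
from numpy.distutils.core import setup

config = Configuration('mypkg')
config.add_library('some_library', sources=['file2.f'])
config.add_extension('some_extension',
                     sources=['file1.c'],
                     libraries=['some_library'])

if __name__ == '__main__':
    setup(**config.todict())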
From jsseabold at gmail.com Fri Oct 9 10:21:12 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Fri, 9 Oct 2009 10:21:12 -0400
Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation
In-Reply-To: <4ACCE9F2.4090002@gmail.com>
References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com>
Message-ID:

On Wed, Oct 7, 2009 at 3:20 PM, Bruce Southey wrote:
> On 10/07/2009 10:52 AM, Skipper Seabold wrote:
>> On Wed, Oct 7, 2009 at 11:25 AM, Dharhas Pothina wrote:
>>> Hi,
>>>
>>> It took me a while and a lot of trial and error to work out why this didn't work as expected.
>>>
>>> data = np.genfromtxt(fname,usecols=(2,3,4),names='x,y,z')
>>>
>>> this command works and does not return any warnings or errors, but returns a numpy array with no field names. If you use:
>>>
>>> data = np.genfromtxt(fname,usecols=(2,3,4),dtype=None,names='x,y,z')
>>>
>>> then the command does what I expect it to and returns a structured numpy array with field names. So essentially, the 'names' argument doesn't work unless you also specify the 'dtype' argument.
>>>
> What did you actually expect?
> It would be very informative if you could provide a simple example of
> this for testing.
>
> There are many combinations of arguments so not all have been tested and
> it is not always clear what the expected behavior should be.
>
>>> I think it would be less confusing to new users to either have this explicitly mentioned in the documentation string for the genfromtxt 'names' argument or to have the function default to 'dtype=None' if the 'names' argument is specified without specifying the 'dtype' argument.
>>>
>>> - dharhas
>>>
>> I came across this behavior recently and agree with you. There is a
>> patch in the works for this.
>>
>> See this thread: http://thread.gmane.org/gmane.comp.python.numeric.general/33479
>>
>> And this ticket: http://projects.scipy.org/numpy/ticket/1252
>>
>> Cheers,
>>
>> Skipper
>>
> From the numpy help, there is this example:
> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
> ('mystring','S5')], delimiter=",")
>
> It does not help that the dtype of structured arrays also includes the
> actual name. So I do not think we can use the dtype argument without using
> the combination of dtype and name. Perhaps if dtype is split into names
> and formats so that dtype=('name', 'format').
>
> In some sense you are suggesting that we should have something like:
>

With the defaultfmt keyword added and the new changes, here is the current state of things.

from StringIO import StringIO
import numpy as np

s = StringIO("1,2,3.0")

> Ignore the use of None and True for dtype and names arguments:
> i) If only dtype is specified, then use the specified dtype and add
> default names such as col1, col2, ... if necessary
>

This gives a plain array, so no default names are used.

data = np.genfromtxt(s, delimiter=",") # dtype=float

In [54]: data
Out[54]: array([ 1., 2., 3.])

If default names are specified then it doesn't seem to pick them up as of right now.
s.seek(0)
data = np.genfromtxt(s, delimiter=",", defaultfmt="Var%i")

In [79]: data
Out[79]: array([ 1., 2., 3.])

> ii) If names is only specified then construct the dtype as ('name',
> 'default format')

s.seek(0)
data = np.genfromtxt(s, delimiter=",", names=['var1','var2','var3']) #dtype = float

In [57]: data
Out[57]:
array((1.0, 2.0, 3.0),
      dtype=[('var1', '<f8'), ('var2', '<f8'), ('var3', '<f8')])

> iii) If formats is only specified then construct the dtype as ('default
> name', 'format')

This doesn't seem to work with the new easy dtype as noted above.

But this does

data = np.genfromtxt(s, delimiter=",", dtype=(int,int,float), defaultfmt="var%i")

In [72]: data
Out[72]:
array((1, 2, 3.0),
      dtype=[('var0', '<i8'), ('var1', '<i8'), ('var2', '<f8')])

> iv) If only names and formats are specified then construct the
> dtype as ('name', 'format')
>

So I think this means,

s.seek(0)
data = np.genfromtxt(s, delimiter=",", dtype=(int,int,float), names=['var1','var2','var3'])

In [86]: data
Out[86]:
array((1, 2, 3.0),
      dtype=[('var1', '<i8'), ('var2', '<i8'), ('var3', '<f8')])

> v) If no dtype, names or formats are specified then construct the
> dtype as ('default name', 'default format')
>

Same case as above I think where

s.seek(0)
data = np.genfromtxt(s, delimiter=",", defaultfmt="var%i")

doesn't work as "expected" to zip float (the default format) with the default name, specified by defaultfmt.

> vi) If dtype and names or formats are specified then use dtype if it is
> of the form ('name', 'format') or use one of the previous cases.
>

This seems to be the case for defaultfmt,

s.seek(0)
data = np.genfromtxt(s, dtype=[('var1',int),('var2',int),('var3',float)], delimiter=",", defaultfmt="VAR%i")

In [99]: data
Out[99]:
array((1, 2, 3.0),
      dtype=[('var1', '<i8'), ('var2', '<i8'), ('var3', '<f8')])

But if names is specified, then it's never ignored

s.seek(0)
data = np.genfromtxt(s, dtype=[('var1',int),('var2',int),('var3',float)], delimiter=",", names=['VAR1','VAR2','VAR3'])

In [102]: data
Out[102]:
array((1, 2, 3.0),
      dtype=[('VAR1', '<i8'), ('VAR2', '<i8'), ('VAR3', '<f8')])

> When dtype is None this implies format is None so the format is obtained
> from the data. If names is not True then the names are either from the
> argument or default values.
>

Well, genfromtxt returns plain arrays too, so if names is not True or an argument, then we can't give default values. I think defaultfmt should have a True argument as well, that way you can return a structured array with f0, f1, f2 as the names if that's what you want.

> If names argument is True then the names should be read from the data
> and one of the previous cases apply.
>

It's a bit confusing to think of data type "formats" and have the defaultfmt, perhaps it should be defaultnm?

So in sum, I think we should maybe have a True argument for defaultfmt, maybe change the name to defaultnm to avoid confusion, and have it so the easy dtype construction works with defaultfmt. I will comment on the open tickets.

Anything I missed?

Skipper

From bsouthey at gmail.com Fri Oct 9 11:21:06 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 09 Oct 2009 10:21:06 -0500 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> Message-ID: <4ACF54E2.10409@gmail.com>

On 10/09/2009 09:21 AM, Skipper Seabold wrote: > On Wed, Oct 7, 2009 at 3:20 PM, Bruce Southey wrote: >> On 10/07/2009 10:52 AM, Skipper Seabold wrote: >>> On Wed, Oct 7, 2009 at 11:25 AM, Dharhas Pothina >>> wrote: >>>> Hi, >>>> >>>> It took me a while and a lot of trial and error to work out why this didn't work as expected. >>>> >>>> data = np.genfromtxt(fname,usecols=(2,3,4),names='x,y,z') >>>> >>>> this command works and does not return any warnings or errors, but returns a numpy array with no field names.
If you use: >>>> >>>> data = np.genfromtxt(fname,usecols=(2,3,4),dtype=None,names='x,y,z') >>>> >>>> then the command does what I expect it to and returns a structured numpy array with field names. So essentially, the 'names' argument doesn't work unless you also specify the 'dtype' argument. >>>> >>>> >> What did you actually expect? >> It would be very informative if you could provide a simple example of >> this for testing. >> >> There are many combinations of arguments so not all have been tested and >> it is not always clear what the expected behavior should be. >> >>>> I think it would be less confusing to new users to either have this explicitly mentioned in the documentation string for the genfromtxt 'names' argument or to have the function default to 'dtype=None' if the 'names' argument is specified without specifying the 'dtype' argument. >>>> >>>> - dharhas >>>> >>> I came across this behavior recently and agree with you. There is a >>> patch in the works for this. >>> >>> See this thread: http://thread.gmane.org/gmane.comp.python.numeric.general/33479 >>> >>> And this ticket: http://projects.scipy.org/numpy/ticket/1252 >>> >>> Cheers, >>> >>> Skipper >>> >> From the numpy help, there is this example: >> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), >> ('mystring','S5')], delimiter=",") >> >> It does not help that the dtype of structured arrays also includes the >> actual name. So I do not think we can use the dtype argument without using >> the combination of dtype and name. Perhaps if dtype is split into names >> and formats so that dtype=('name', 'format'). >> >> In some sense you are suggesting that we should have something like: >> > With the defaultfmt keyword added and the new changes here is the > current state of things. >

Which version is that? (Okay, I updated from SVN but have not tried to build it due to the recent issues about import)

> from StringIO import StringIO
> import numpy as np
>
> s = StringIO("1,2,3.0")
>
>> Ignore the use of None and True for dtype and names arguments:
>> i) If only dtype is specified then use the specified dtype and add
>> default names such as col1, col2,... if necessary
>>
> This gives a plain array, so no default names are used.
>
> data = np.genfromtxt(s, delimiter=",") # dtype=float
>
> In [54]: data
> Out[54]: array([ 1., 2., 3.])

Rats, I forgot about plain arrays. But this is a bug because the default argument is defaultfmt="f%i". But if this option is kept then I think the default argument of defaultfmt should be None.

> If default names are specified then it doesn't seem to pick them up as
> of right now.
>
> s.seek(0)
> data = np.genfromtxt(s, delimiter=",", defaultfmt="Var%i")
>
> In [79]: data
> Out[79]: array([ 1., 2., 3.])

This is also a bug.

>> ii) If names is only specified then construct the dtype as ('name',
>> 'default format')
>>
> s.seek(0)
> data = np.genfromtxt(s, delimiter=",", names=['var1','var2','var3'])
> #dtype = float
>
> In [57]: data
> Out[57]:
> array((1.0, 2.0, 3.0),
>       dtype=[('var1', '<f8'), ('var2', '<f8'), ('var3', '<f8')])

Excellent, just what I expected.

>> iii) If formats is only specified then construct the dtype as ('default
>> name', 'format')
>>
> This doesn't seem to work with the new easy dtype as noted above.
>
> But this does
>
> data = np.genfromtxt(s, delimiter=",", dtype=(int,int,float),
> defaultfmt="var%i")
>
> In [72]: data
> Out[72]:
> array((1, 2, 3.0),
>       dtype=[('var0', '<i8'), ('var1', '<i8'), ('var2', '<f8')])

I forgot that a plain array could be desired.
>> iv) If only names and formats are specified then construct the
>> dtype as ('name', 'format')
>>
> So I think this means,
>
> s.seek(0)
> data = np.genfromtxt(s, delimiter=",", dtype=(int,int,float),
> names=['var1','var2','var3'])
>
> In [86]: data
> Out[86]:
> array((1, 2, 3.0),
>       dtype=[('var1', '<i8'), ('var2', '<i8'), ('var3', '<f8')])

Yes that is what I meant.

>> v) If no dtype, names or formats are specified then construct the
>> dtype as ('default name', 'default format')
>>
> Same case as above I think where
>
> s.seek(0)
> data = np.genfromtxt(s, delimiter=",", defaultfmt="var%i")
>
> doesn't work as "expected" to zip float (the default format) with the
> default name, specified by defaultfmt.

Again I did forget about having a plain array which would be the case here.

>> vi) If dtype and names or formats are specified then use dtype if it is
>> of the form ('name', 'format') or use one of the previous cases.
>>
> This seems to be the case for defaultfmt,
>
> s.seek(0)
> data = np.genfromtxt(s,
> dtype=[('var1',int),('var2',int),('var3',float)], delimiter=",",
> defaultfmt="VAR%i")
>
> In [99]: data
> Out[99]:
> array((1, 2, 3.0),
>       dtype=[('var1', '<i8'), ('var2', '<i8'), ('var3', '<f8')])
>
> But if names is specified, then it's never ignored
>
> s.seek(0)
> data = np.genfromtxt(s,
> dtype=[('var1',int),('var2',int),('var3',float)], delimiter=",",
> names=['VAR1','VAR2','VAR3'])
>
> In [102]: data
> Out[102]:
> array((1, 2, 3.0),
>       dtype=[('VAR1', '<i8'), ('VAR2', '<i8'), ('VAR3', '<f8')])

Here the problem is which user input overrides the other. As long as it is clearly documented what happens then I do not care (I care when things are not stated).

>> When dtype is None this implies format is None so the format is obtained
>> from the data. If names is not True then the names are either from the
>> argument or default values.
>>
> Well, genfromtxt returns plain arrays too, so if names is not True or
> an argument, then we can't give default values. I think defaultfmt
> should have a True argument as well, that way you can return a
> structured array with f0, f1, f2 as the names if that's what you want.

Yes, I forgot about the plain array case or having no named fields.

But I think that this could still be handled by the names argument. So that if a user does not specify any name (name=None) and no dtype (or all columns have the same dtype) then we have to return a plain array. I presume, because I have not tested it, that names='var%i' should work. So that we could have 'names=False' that would be the same as names='var%i'. Also, I think that a structured array results when different dtypes are specified so that should automatically have the same effect as names='var%i'.

>> If names argument is True then the names should be read from the data
>> and one of the previous cases apply.
>>
> It's a bit confusing to think of data type "formats" and have the
> defaultfmt, perhaps it should be defaultnm?

I agree. With formats, I expect things like different character and numeric types. If we can add this to the names argument then we should not need it.

>> So in sum, I think we should maybe have a True argument for
>> defaultfmt, maybe change the name to defaultnm to avoid confusion, and
>> have it so the easy dtype construction works with defaultfmt. I will
>> comment on the open tickets.
>>
>> Anything I missed?
>>
>> Skipper

Excellent job but you missed the case of the names supplied in the header.
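For instance, the header case I have in mind would be something like this (a sketch only, reusing the imports above; I have not run it against the patched SVN version):

s2 = StringIO("x,y,z\n1,2,3.0")
data = np.genfromtxt(s2, delimiter=",", names=True, dtype=None)
# expected: a structured array whose field names 'x', 'y', 'z'
# are taken from the first line of the file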
Bruce

From nitinchandra1 at gmail.com Fri Oct 9 12:24:34 2009 From: nitinchandra1 at gmail.com (nitin chandra) Date: Fri, 9 Oct 2009 21:54:34 +0530 Subject: [SciPy-User] NumPy Message-ID: <965122bf0910090924p4dd19c7eua61506830348c4e2@mail.gmail.com>

Hello Everyone, Agile Aspect

As requested please find attached 'site.cfg' from /opt/python262/lib/site-package/numpy/distutils/

Thanks

Nitin

-------------- next part -------------- A non-text attachment was scrubbed... Name: site.cfg Type: application/octet-stream Size: 5322 bytes Desc: not available URL:

From dominique.orban at gmail.com Fri Oct 9 14:10:39 2009 From: dominique.orban at gmail.com (dpo) Date: Fri, 9 Oct 2009 11:10:39 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Custom sections in site.cfg In-Reply-To: <25788619.post@talk.nabble.com> References: <25788619.post@talk.nabble.com> Message-ID: <25825593.post@talk.nabble.com>

dpo wrote:
>
> Hi all,
>
> Is it possible / easy to add custom sections to a site.cfg for a project
> that relies upon Numpy? I need BLAS, LAPACK etc., and Numpy distutils lets
> me grab those conveniently from site.cfg but I'd like to also add a few
> extra sections.
>
> Thanks for any pointer, suggestion, or example!
>

For anybody who might be interested, my temporary solution consists in having each setup.py script read the config file using a ConfigParser instance. The downside is that the file is read many times; I am not sure how to read it once and make all config info visible to other setup.py scripts. Each 'configuration' function has bits like the following:

def configuration(parent_package='',top_path=None):
    import ConfigParser
    from numpy.distutils.misc_util import Configuration
    from numpy.distutils.system_info import get_info

    # Read our custom configuration options.
    custom_config = ConfigParser.SafeConfigParser()
    custom_config.read(os.path.join(top_path, 'site.cfg'))
    custom_option = custom_config.get('CUSTOMSECTION', 'custom_option')

    config = Configuration('mypkg', parent_package, top_path)

    # Get info from site.cfg using the Numpy distutils infrastructure.
    blas_info = get_info('blas_opt',0)
    if not blas_info:
        print 'No blas info found'
    ...

From pgmdevlist at gmail.com Fri Oct 9 14:27:37 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 9 Oct 2009 14:27:37 -0400 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: <4ACF54E2.10409@gmail.com> References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> <4ACF54E2.10409@gmail.com> Message-ID:

On Oct 9, 2009, at 11:21 AM, Bruce Southey wrote: > On 10/09/2009 09:21 AM, Skipper Seabold wrote:

As a disclaimer, I think some of you are misunderstanding the purpose of defaultfmt. It is meant to be used when fields are expected but no names are given, as a replacement for numpy's default "f%i". It is not meant to define new names. Think about it as a way to get around numpy's default.

>> data = np.genfromtxt(s, delimiter=",") # dtype=float
>>
>> In [54]: data
>> Out[54]: array([ 1., 2., 3.])
>>
> Rats, I forgot about plain arrays. But this is a bug because the default
> argument is defaultfmt="f%i".

Wait, "it's not a bug, it's a feature (TM)". Cf the disclaimer.

> But if this option is kept then I think the
> default argument of defaultfmt should be None.

The default is defaultfmt="f%i", just like numpy.
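(For comparison, numpy's own default field naming, the "f%i" pattern, can be checked on any structured dtype built without explicit names; a quick sketch:

>>> import numpy as np
>>> np.dtype('i4,f8').names
('f0', 'f1')

defaultfmt simply substitutes another pattern for that default.)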
>> If default names are specified then it doesn't seem to pick them up
>> as
>> of right now.
>>
>> s.seek(0)
>> data = np.genfromtxt(s, delimiter=",", defaultfmt="Var%i")
>>
>> In [79]: data
>> Out[79]: array([ 1., 2., 3.])
>>
> This is also a bug.

No, this works as expected: no names given (explicitly through `names` or implicitly with `names=True`), no names expected, explicit dtype (through the default `dtype=float`), so all is well.

>>> ii) If names is only specified then construct the dtype as ('name',
>>> 'default format')
>>>
>> s.seek(0)
>> data = np.genfromtxt(s, delimiter=",", names=['var1','var2','var3'])
>> #dtype = float
>>
>> In [57]: data
>> Out[57]:
>> array((1.0, 2.0, 3.0),
>>       dtype=[('var1', '<f8'), ('var2', '<f8'), ('var3', '<f8')])
>>
> Excellent, just what I expected.

>>> iii) If formats is only specified then construct the dtype as
>>> ('default
>>> name', 'format')
>>>
>> This doesn't seem to work with the new easy dtype as noted above.
>>
>> But this does
>>
>> data = np.genfromtxt(s, delimiter=",", dtype=(int,int,float),
>> defaultfmt="var%i")
>>
>> In [72]: data
>> Out[72]:
>> array((1, 2, 3.0),
>>       dtype=[('var0', '<i8'), ('var1', '<i8'), ('var2', '<f8')])

>>> v) If no dtype, names or formats are specified then
>>> construct the
>>> dtype as ('default name', 'default format')
>>>
>> Same case as above I think where
>>
>> s.seek(0)
>> data = np.genfromtxt(s, delimiter=",", defaultfmt="var%i")
>>
>> doesn't work as "expected" to zip float (the default format) with the
>> default name, specified by defaultfmt.

I appreciate the quotes around expected. `defaultfmt` is used only if a name is expected but can't be found. Here, no names are expected because the default `dtype=float` is used.

>>> vi) If dtype and names or formats are specified then use dtype if
>>> it is
>>> of the form ('name', 'format') or use one of the previous cases.
>>>
>> This seems to be the case for defaultfmt,
>>
>> s.seek(0)
>> data = np.genfromtxt(s,
>> dtype=[('var1',int),('var2',int),('var3',float)], delimiter=",",
>> defaultfmt="VAR%i")
>>
>> In [99]: data
>> Out[99]:
>> array((1, 2, 3.0),
>>       dtype=[('var1', '<i8'), ('var2', '<i8'), ('var3', '<f8')])
>>
>> But if names is specified, then it's never ignored
>>
>> s.seek(0)
>> data = np.genfromtxt(s,
>> dtype=[('var1',int),('var2',int),('var3',float)], delimiter=",",
>> names=['VAR1','VAR2','VAR3'])
>>
>> In [102]: data
>> Out[102]:
>> array((1, 2, 3.0),
>>       dtype=[('VAR1', '<i8'), ('VAR2', '<i8'), ('VAR3', '<f8')])
>>
> Here the problem is which user input overrides the other. As long as it
> is clearly documented what happens then I do not care (I care when
> things are not stated).

OK, I need to improve the documentation here. Yes, giving `names` overwrites the names given in dtype.

>>> When dtype is None this implies format is None so the format is
>>> obtained
>>> from the data. If names is not True then the names are either from
>>> the
>>> argument or default values.

Yes, `dtype=None` means that the format has to be defined from the data themselves. If the resulting dtype turns out to be structured, you will need names. If `names=True` they're read from the first valid line. If `names` is given, then use those ones. If `names=None`, then construct some from `defaultfmt`.

>
> But I think that this could still be handled by the names argument. So
> that if a user does not specify any name (name=None) and no dtype (or
> all columns have the same dtype) then we have to return a plain array.

Nope, it was never the case before: you're returning what can be estimated from the data. "1, 1, 1" would give you ints, "1., 1., 1."
would give you floats, "1, 1., x" will give you (int,float,'|S1')

>>> If names argument is True then the names should be read from the
>>> data
>>> and one of the previous cases apply.
>>>
>> It's a bit confusing to think of data type "formats" and have the
>> defaultfmt, perhaps it should be defaultnm?

Well, defaultfmt is a format string, so it should be clear that it's used to format names.

>
> I agree. With formats, I expect things like different character and
> numeric types. If we can add this to the names argument then we should
> not need it.

>> So in sum, I think we should maybe have a True argument for
>> defaultfmt, maybe change the name to defaultnm to avoid confusion,
>> and
>> have it so the easy dtype construction works with defaultfmt. I will
>> comment on the open tickets.

No. "Leave `defaultfmt` alone !"...

From jkington at wisc.edu Fri Oct 9 14:40:47 2009 From: jkington at wisc.edu (Joe Kington) Date: Fri, 9 Oct 2009 13:40:47 -0500 Subject: [SciPy-User] Forced derivative interpolation?? In-Reply-To: References: <7025675161964DF7ACE12FF253F15D32@innova.locale> Message-ID:

Hi Marco,

First off, take a close look at what Anne recommended. It's probably much more efficient to do this using another method if there is one. Also, as Robert said, please reply using the title of your original post, not the title of the digest. It makes things much easier to find! :)

Keep in mind that there are many ways to do exactly what you've described. As with any interpolation method, there are an infinite number of "right" ways to do it that will all yield different results. The simplest way to do this is to fit a piecewise cubic function to each pair of points and derivatives. (In other words, a cubic spline, but that can mean many things.)

So what we have is position (x) as a function of time (t):

x(t) = At^3 + Bt^2 + Ct + D

For each pair of points (t_0, x_0, t_1, x_1) and their derivatives (v_0, v_1), we want to solve for A, B, C, & D such that:

x_0 = At_0^3 + Bt_0^2 + Ct_0 + D
x_1 = At_1^3 + Bt_1^2 + Ct_1 + D
v_0 = 3At_0^2 + 2Bt_0 + C
v_1 = 3At_1^2 + 2Bt_1 + C

Now, I'm lazy and don't particularly feel like doing the algebra, so let's just put this in matrix form:

|x_0|   |  t_0^3  t_0^2  t_0  1|   |A|
|x_1| = |  t_1^3  t_1^2  t_1  1| * |B|
|v_0|   | 3t_0^2   2t_0    1  0|   |C|
|v_1|   | 3t_1^2   2t_1    1  0|   |D|

Which we can express as d = G*m and solve with numpy.

Now this isn't particularly elegant code, and I apologize for the lack of comments (and general lack of clarity), but it should work as an example. See the attached image for an example plot.

import scipy as sp
import numpy as np
from matplotlib import pyplot as plt

def main():
    t = np.arange(11)
    x = np.random.randn(t.size)
    v = np.random.randn(t.size)
    ti = np.linspace(-1,11,100)
    xi = interpolate(t, x, v, ti)
    plot(ti, xi, t, x, v)

def interpolate(t, x, v, ti):
    """Given a set of points and the expected derivatives at those points,
    fit a cubic spline using the points and derivatives as constraints
    and return the interpolated values for each point in "ti"
    Input:
        t: The independent variable at each observed point
        x: The dependent variable at each observed point
        v: The derivative dx/dt at each observed point
        ti: The (new) t-values to solve for x at
    Output:
        xi: The interpolated x-values
    """
    # Note: there's probably a more elegant way of doing this...
    xi = np.zeros(ti.shape)
    for i in range(1, len(t)):
        t0, t1 = t[i-1], t[i]
        G = np.array([
            [t0**3, t0**2, t0, 1],
            [t1**3, t1**2, t1, 1],
            [3 * t0**2, 2 * t0, 1, 0],
            [3 * t1**2, 2 * t1, 1, 0],
            ])
        d = np.array([x[i-1], x[i], v[i-1], v[i]])
        m = np.linalg.solve(G,d)
        # One way of setting boundary conditions...
        if i == 1:
            # Points to the left of our data range
            interval = ti <= t[i]
        elif i == len(t)-1:
            # Points to the right of our data range
            interval = ti >= t[i-1]
        else:
            # Within bounds of data
            interval = (ti >= t[i-1]) & (ti <= t[i])
        t_int = ti[interval]
        xi[interval] = m[0] * t_int**3 + m[1] * t_int**2 + m[2] * t_int + m[3]
    return xi

def plot(x, y, x0, y0, velocities):
    plt.figure()
    plt.hold(True)
    plt.plot(x, y, 'b-')
    plot_slopes(x0, y0, velocities)
    plt.plot(x0, y0, 'ro')
    plt.title('Fitting a spline using derivatives as constraints')
    plt.xlabel('Time')
    plt.ylabel('Position')
    plt.show()

def plot_slopes(x, y, velocities):
    from math import atan, cos, sin
    marker_width = np.diff(x).mean() / 4
    for x0, y0, v in zip(x, y, velocities):
        xdist = marker_width * cos(atan(v))
        ydist = marker_width * sin(atan(v))
        xleft, xright = x0 - xdist, x0 + xdist
        yleft, yright = y0 - ydist, y0 + ydist
        plt.plot([xleft, xright], [yleft, yright], 'r-')

main()

On Wed, Oct 7, 2009 at 2:44 AM, Marco Nicoletti <nicoletti at consorzio-innova.it> wrote: > Dear Joe, > > what I want to do is to interpolate the position array adding the > constraint about the first derivative (the velocity array) which I > already have. > > An example: I have t = [0,2,4,6] (the definition domain), p(t) = > [10,13,16,18] (the position array) and v(t) = [1.1, 0.8, 0.7, 0.4]. > I have this new domain t1 = [0,1,2,3,4,5,6] and I want to obtain p1(t1) > imposing the constraint about the velocities v1(t) = v(t). > I want the spline interpolation. > > Any ideas? > > Thanks in advance for your advice. > > Marco Nicoletti

> On Wed, Sep 30, 2009 at 2:12 PM, Anne Archibald wrote: > 2009/9/30 Marco Nicoletti : > > Dear all, > > > > I want to implement a spline interpolation forcing the condition on the > > first or second derivative. > > In other words I have a vector of position (p), velocity (v) and > > acceleration (a) values; > > I want to interpolate the position (p) vector imposing the conditions on the > > velocity and acceleration values. > > > > The class UnivariateSpline() or intrp1D() in scipy.interpolate package don't > > take as parameter the derivatives > > (they export a method to evaluate derivatives). > > > > Any suggestions? > > If I have correctly understood your question, what you want to do is > produce an interpolating spline with not just specified point values > but specified derivative values at the given points. Scipy has at > least two different pieces of code that might help. The first is, in > recent versions of scipy, scipy.interpolate.PiecewisePolynomial. This > allows you to fit a piecewise polynomial through a set of points, > specifying derivatives at each point. It doesn't allow you to impose a > spline-like constraint that higher derivatives must be continuous at > the points. Its evaluation is also implemented in pure python, so it > won't be terribly fast. > > A second option, useful if you need fast evaluation, is to abuse > scipy's spline functions. scipy.interpolate.splrep doesn't take > derivatives, but what it returns is a triple t, c, k. Given a t, c, k, > you can then call splev, splint, splder, etcetera to get nice fast > evaluation in compiled code. So what you can do is fabricate your own > t, c, and k values.
t is the list of knots, c is some sort of > coefficients, and k is the order of the spline. The brute-force way I > found to get these splines to produce the derivatives I wanted > required me to repeat values in the t array. But once you've fixed the > t array, the result is linear in the c values, so a little trial and > error will give you formulas to produce any curve you need. > > Good luck, > Anne > > Thanks very much and have a nice day! > > > > Marco Nicoletti > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >

-------------- next part -------------- A non-text attachment was scrubbed... Name: spline_derivative_example.png Type: image/png Size: 33686 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: forced_derivative_interpolation.py Type: text/x-python Size: 2371 bytes Desc: not available URL:

From ralf.gommers at googlemail.com Fri Oct 9 14:48:45 2009 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 9 Oct 2009 20:48:45 +0200 Subject: [SciPy-User] a small example of scipy.ndimage.map_coordinates In-Reply-To: <66fc52dd-4c61-4bcb-8902-1a566aee5b34@x37g2000yqj.googlegroups.com> References: <74bd6125-954d-4ae3-bef3-791a81b0fb67@y21g2000yqn.googlegroups.com> <1254943717.4757.19.camel@idol> <49d6b3500910071232k5ff25d19mdc100dc54aadb709@mail.gmail.com> <66fc52dd-4c61-4bcb-8902-1a566aee5b34@x37g2000yqj.googlegroups.com> Message-ID:

On Fri, Oct 9, 2009 at 3:46 PM, denis wrote: > Folks, > > On pydocweb: > thanks, but Django ?! > Despite the nice pydocweb/doc/installation.rst, > what are the chances that a dummy > (me, old guy raised on paper doc with no install) > could install Django, get through pages of setup, > and live to tell the tale ? > I just prefer minimal packages to big ones -- Django, Enthought -- > with an avalanche of requires. > rest-to-html > my.html, refresh my.html in the browser is not > realtime, but. > >

Well, you're asking for a local version of a very nice web app with a lot of features. Getting similar functionality with a minimal package is not very likely to happen.

> On Markdown: > although I (old guy) dislike most GUIs, I've gotten to like > the Markdown editor in mechanicalkern / stackoverflow. > Once you get used to realtime, you want it; > and the half-dozen buttons for \ etc are good. > Look at a stackoverflow discussion vs one in gmail -- form does > affect function. > > It's still not clear to me how mechanicalkern -> mainline scipy doc > will work, > perhaps a new thread on that in scipy-dev ?

mechanicalkern is a Q&A site, pydocweb is a wiki - that's a fundamental difference. Once a Q&A thread concludes and the best answer fits into the docs, it can just be copied over to the wiki. Who does this and when is not very well defined. Nor is this very easy to do.

Cheers, Ralf

> > cheers > -- denis > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >
From amenity at enthought.com Fri Oct 9 17:59:55 2009 From: amenity at enthought.com (Amenity Applewhite) Date: Fri, 9 Oct 2009 16:59:55 -0500 Subject: [SciPy-User] October 16 Scientific Computing with Python Webinar: Traits References: <1874882496.1255125323830.JavaMail.root@p2-ws606.ad.prodcc.net> Message-ID:

Friday, October 16: Traits

SCIENTIFIC COMPUTING WITH PYTHON WEBINAR

Hello! It's already time for our October Scientific Computing with Python webinar! This month we'll be handling Traits, one of our most popular training topics.

Traits: Expanding the Power of Attributes

An essential component of the open source Enthought Tool Suite, the Traits package is at the center of all development we do at Enthought. In fact, it has changed the mental model we use for programming in the already extremely efficient Python programming language. Briefly, a trait is a type definition that can be used for normal Python object attributes, giving the attributes some additional characteristics: initialization, validation, delegation, notification, and (optionally) visualization (GUIs). In this webinar we will provide an introduction to Traits by walking through several examples that show what you can do with Traits.

Scientific Computing With Python Webinar: Traits October 16 1pm CDT/6pm UTC Register at GoToMeeting

We hope to see you there! Also, don't forget that this free event is open to the public. As always, feel free to contact us with questions, concerns, or suggestions for future webinar topics.

Have a great weekend, The Enthought Team

From denis-bz-gg at t-online.de Mon Oct 12 07:25:36 2009 From: denis-bz-gg at t-online.de (denis) Date: Mon, 12 Oct 2009 04:25:36 -0700 (PDT) Subject: [SciPy-User] Forced derivative interpolation?? In-Reply-To: References: <7025675161964DF7ACE12FF253F15D32@innova.locale> Message-ID:

An alternate component of array interpolate(), straight from Wikipedia (don't forget h !) --

def spline_2p2s( t, p0, p1, m0, m1, h=1 ):
    """ Hermite 2-point, 2-slope spline
        see http://en.wikipedia.org/wiki/Cubic_Hermite_spline
    """
        # 0 -> p0,  1 -> p1,  1/2 -> (p0 + p1) / 2  -  (m1 - m0) / 8
    try:
        t2 = t*t
        t3 = t2*t
        return (
          p0 * (2*t3 - 3*t2 + 1)
        + p1 * (-2*t3 + 3*t2)
        + m0 * h * (t3 - 2*t2 + t)
        + m1 * h * (t3 - t2) )
    except ValueError:
        # shape mismatch: objects cannot be broadcast to a single shape
        # i.e. points, t both vecs (is there a better way ?)
        return [spline_2p2s( t, p0, p1, m0, m1, h ) for t in t.copy()]

cheers -- denis

From etrade.griffiths at dsl.pipex.com Mon Oct 12 07:55:18 2009 From: etrade.griffiths at dsl.pipex.com (Etrade Griffiths) Date: Mon, 12 Oct 2009 12:55:18 +0100 Subject: [SciPy-User] SCIPY FSOLVE Message-ID: <7jho38$4bad95@smtp.pipex.tiscali.co.uk>

Hi

I am trying to solve directly a series of equations describing flow in a network using FSOLVE but have not had much success so far. A small example is given below.
I have a feeling that the problem may be ill-conditioned because there are two sources of small numbers in the equations: one associated with small coefficients and the other associated with small differences in pressure between adjacent nodes. The main problem seems to be that FSOLVE ends up with estimates of the variables that result in raising a negative number to a power, so that some of the residuals become -1.#IND in value. I tried trapping this type of error but FSOLVE does not appear to be able to move towards the solution. I would be grateful for any suggestions on how to "encourage" FSOLVE.

Thanks in advance

Alun Griffiths

# =========================
#
# Simple model of gathering network
# Uses SCIPY FSOLVE to solve for network pressures directly
#

import math
import scipy.optimize

# Set up the system of network equations

def NetworkEquations(x):

    # Transfer current guess to variables
    p2 = x[0]
    q1 = x[1]
    q2 = x[2]
    q3 = x[3]

    # Define constants
    c = [300.0, 2.00E-5, 1.50E-5]
    n = [0.50, 0.75, 0.70]
    p1 = 200.0
    p3 = 2000.0

    # Define the residuals
    residuals = []

    # ... Kirchoff
    residuals.append( q1 - q2 - q3 )

    # ... pressure drop along pipeline
    curr_err = q1 - ( c[0] * ( p2 ** 2 - p1 ** 2 ) ) ** n[0]
    residuals.append(curr_err)

    # ... pressure drop over well 1
    curr_err = q2 - c[1] * ( p3 ** 2 - p2 ** 2 ) ** n[1]
    residuals.append(curr_err)

    # ... pressure drop over well 2
    curr_err = q3 - c[2] * ( p3 ** 2 - p2 ** 2 ) ** n[2]
    residuals.append(curr_err)

    # All equations evaluated so return residuals
    return residuals

# Solve equations using Broyden's method
x0 = [250.0, 2.0, 1.0, 1.0]
x1 = scipy.optimize.fsolve(NetworkEquations,x0)
print x1

From josef.pktd at gmail.com Mon Oct 12 09:12:16 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 12 Oct 2009 09:12:16 -0400 Subject: [SciPy-User] SCIPY FSOLVE In-Reply-To: <7jho38$4bad95@smtp.pipex.tiscali.co.uk> References: <7jho38$4bad95@smtp.pipex.tiscali.co.uk> Message-ID: <1cd32cbb0910120612g3803efffgc7bc4a16a51b889b@mail.gmail.com>

On Mon, Oct 12, 2009 at 7:55 AM, Etrade Griffiths wrote: > Hi > > I am trying to solve directly a series of equations describing flow > in a network using FSOLVE but have not had much success so far. A > small example is given below. > > I have a feeling that the problem may be ill-conditioned because > there are two sources of small numbers in the equations: one > associated with small coefficients and the other associated with > small differences in pressure between adjacent nodes. The main > problem seems to be that FSOLVE ends up with estimates of the > variables that result in raising a negative number to a power, so > that some of the residuals become -1.#IND in value. I tried trapping > this type of error but FSOLVE does not appear to be able to move > towards the solution. I would be grateful for any suggestions on how > to "encourage" FSOLVE. > > Thanks in advance > > Alun Griffiths > > # ========================= > # > # Simple model of gathering network > # Uses SCIPY FSOLVE to solve for network pressures directly > # > > import math > import scipy.optimize > > # Set up the system of network equations > > def NetworkEquations(x): > > # Transfer current guess to variables > > p2 = x[0] > q1 = x[1] > q2 = x[2] > q3 = x[3] > > # Define constants > > c = [300.0, 2.00E-5, 1.50E-5] > n = [0.50, 0.75, 0.70] > > p1 = 200.0 > p3 = 2000.0 > > # Define the residuals > > residuals = [] > > # ... Kirchoff > > residuals.append( q1 - q2 - q3 ) > > # ... pressure drop along pipeline > > curr_err = q1 - ( c[0] * ( p2 ** 2 - p1 ** 2 ) ) ** n[0] > residuals.append(curr_err) > > # ... pressure drop over well 1 > > curr_err = q2 - c[1] * ( p3 ** 2 - p2 ** 2 ) ** n[1] > residuals.append(curr_err) > > # ... pressure drop over well 2 > > curr_err = q3 - c[2] * ( p3 ** 2 - p2 ** 2 ) ** n[2] > residuals.append(curr_err) > > # All equations evaluated so return residuals > > return residuals > > > # Solve equations using Broyden's method > > x0 = [250.0, 2.0, 1.0, 1.0] > x1 = scipy.optimize.fsolve(NetworkEquations,x0) > print x1 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >

I never used optimize.fsolve, but in your case, I would try to reparameterize p2 so it has to be between p1 and p3, p1<=p2<=p3 maybe as a fraction in interval [0,1] and then transform it to the interval [p1,p3]

There might be other ways to impose the constraint, but my first attempt is usually reparameterization.

Josef

From warren.weckesser at enthought.com Mon Oct 12 09:50:12 2009 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 12 Oct 2009 08:50:12 -0500 Subject: [SciPy-User] SCIPY FSOLVE In-Reply-To: <7jho38$4bad95@smtp.pipex.tiscali.co.uk> References: <7jho38$4bad95@smtp.pipex.tiscali.co.uk> Message-ID: <4AD33414.1020207@enthought.com>

It appears you are correct in your concern about the closeness of p1 and p2. You can avoid the problem of raising a negative number to a fractional power if you reformulate your residual.

If you change this:

curr_err = q1 - ( c[0] * ( p2 ** 2 - p1 ** 2 ) ) ** n[0]

to this:

curr_err = q1**(1/n[0]) - ( c[0] * ( p2 ** 2 - p1 ** 2 ) )

the script gives the answer:

[ 200.00004794 2.39840655 1.77542113 0.62298543]

Note that p2 is very close to p1, so it is not surprising that during the solver's iterations, p2**2 - p1**2 occasionally becomes negative.

Warren

Etrade Griffiths wrote: > Hi > > I am trying to solve directly a series of equations describing flow > in a network using FSOLVE but have not had much success so far. A > small example is given below. > > I have a feeling that the problem may be ill-conditioned because > there are two sources of small numbers in the equations: one > associated with small coefficients and the other associated with > small differences in pressure between adjacent nodes. The main > problem seems to be that FSOLVE ends up with estimates of the > variables that result in raising a negative number to a power, so > that some of the residuals become -1.#IND in value. I tried trapping > this type of error but FSOLVE does not appear to be able to move > towards the solution. I would be grateful for any suggestions on how > to "encourage" FSOLVE. > > Thanks in advance > > Alun Griffiths > > # ========================= > # > # Simple model of gathering network > # Uses SCIPY FSOLVE to solve for network pressures directly > # > > import math > import scipy.optimize > > # Set up the system of network equations > > def NetworkEquations(x): > > # Transfer current guess to variables > > p2 = x[0] > q1 = x[1] > q2 = x[2] > q3 = x[3] > > # Define constants > > c = [300.0, 2.00E-5, 1.50E-5] > n = [0.50, 0.75, 0.70] > > p1 = 200.0 > p3 = 2000.0 > > # Define the residuals > > residuals = [] > > # ... Kirchoff > > residuals.append( q1 - q2 - q3 ) > > # ...
pressure drop along pipeline > > curr_err = q1 - ( c[0] * ( p2 ** 2 - p1 ** 2 ) ) ** n[0] > residuals.append(curr_err) > > # ... pressure drop over well 1 > > curr_err = q2 - c[1] * ( p3 ** 2 - p2 ** 2 ) ** n[1] > residuals.append(curr_err) > > # ... pressure drop over well 2 > > curr_err = q3 - c[2] * ( p3 ** 2 - p2 ** 2 ) ** n[2] > residuals.append(curr_err) > > # All equations evaluated so return residuals > > return residuals > > > # Solve equations using Broyden's method > > x0 = [250.0, 2.0, 1.0, 1.0] > x1 = scipy.optimize.fsolve(NetworkEquations,x0) > print x1 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From Dharhas.Pothina at twdb.state.tx.us Mon Oct 12 10:13:45 2009 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 12 Oct 2009 09:13:45 -0500 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: <4ACCE9F2.4090002@gmail.com> References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> Message-ID: <4AD2F348.63BA.009B.0@twdb.state.tx.us> Hi All, Before I start I wanted to let all of you know that I really appreciate the work that has gone into genfromtxt. It is a hugely useful function that has become indispensable in my work. A lot of the problems I have in general come from the fact that I am a fairly new python/numpy user and don't always understand some of the intricacies involved. Just a disclaimer. I am not familiar enough with the way genfromtxt works to have understood the entire discussion that followed my posting, so I'm going to answer the questions I can answer. >>> Bruce Southey 10/7/2009 2:20 PM >>> >>> What did you actually expect? >>> It would be very informative if you could provide a simple example of >>> this for testing. Coming from a Matlab background the first thing I would have expected when given an option to read in (or otherwise define column) variables is a structure which lets me know what the name of each column is. In matlab this would be a variable say 'a' such that a.header is a list of header names and a.data has the data in a 2D array such that column 'n' has the data associated with a.header[n]. Now since I've become fairly used to the way python does things, my modified expectation is if I read a file with the data below: 10.0 20.1 30.7 10.0 30.2 40.3 20.1 21.3 67.5 ... with the command: a = np.genfromtxt(fname,usecols=(0,1,2),names='x,y,z') I should get a structured array such that a['x'] = np.array([10.0,10.0,20.1,...]) etc. If you would like a sample data file I can provide one. >>> There are many combinations of arguments so not all have been tested and >>> it is not always clear what the expected behavior should be. I think for me the confusion is in an initial lack of understanding on how dtypes work. If I type help np.genfromtxt in Ipython I get: names : {None, True, string, sequence}, optional If `names` is True, the field names are read from the first valid line after the first `skiprows` lines. If `names` is a sequence or a single-string of comma-separated names, the names will be used to define the field names in a flexible dtype. If `names` is None, the names of the dtype fields will be used, if any. My understanding of this was that the names argument would be used to define the field names. 
What I didn't realize is that if the dtype is not explicitly set (or set equal to None) then since all the data in the files are floats the dtype for the entire array is float rather than each column having its own dtype. So there are no column specific dtypes whose field names can be set to the values I specified and the field names I set are ignored (at least that's what I think is happening).

To me the reason for having the 'names' argument is so that there is a mechanism to show what the names of each column are. The fact that it fails silently when the dtype is not specified is what was problematic. So my suggestion was to do one of the following:

1) add something in the docstring to note that dtype needs to be specified for the names argument to work
2) change the way genfromtxt works to default to dtype=None when the 'names' argument is invoked without a dtype being specified
3) issue some sort of warning/error

>>> From the numpy help, there is this example: >>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), >>> ('mystring','S5')], delimiter=",") >>> >>> It does not help that the dtype of structured arrays also includes the >>> actual name. So I do not think we can use the dtype argument without using >>> the combination of dtype and name. Perhaps if dtype is split into names >>> and formats so that dtype=('name', 'format').

I think when I was reading the help, I was immediately drawn to the 'names' argument as the part of the function that would do what I needed it to. It was only a while later that I read through things more completely and worked out the connection to 'dtype' and also the fact that I could specify the field names through the 'dtype' argument as well. To me the combination of dtype=None & names='x,y,z' is more useful because I can give each column a name but let numpy figure out the format automatically without having to specify each column manually.

- dharhas

From denis-bz-gg at t-online.de Mon Oct 12 10:42:21 2009 From: denis-bz-gg at t-online.de (denis) Date: Mon, 12 Oct 2009 07:42:21 -0700 (PDT) Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) Message-ID: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>

Some vectors are iterators: I want expr(T) if isscalar(T) \ else array([expr(t) for t in T]) Is there an idiom for this ? One can try:

    try:
        expr(T)
    except ValueError:
        # ValueError: shape mismatch: objects cannot be broadcast to a single shape
        array([ expr(t) for t in T ])

but this is ugly, and probably wrong in some cases even with no ValueError. I'm afraid the broadcasting rules are just too complex for me / I'm too simple for the rules.

For a real example, see spline_2p2s in the thread "Forced derivative interpolation".

Thanks, cheers -- denis

From peridot.faceted at gmail.com Mon Oct 12 11:36:30 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 12 Oct 2009 11:36:30 -0400 Subject: [SciPy-User] Forced derivative interpolation?? In-Reply-To: References: <7025675161964DF7ACE12FF253F15D32@innova.locale> Message-ID:

Actually, there is code implementing pretty much what the OP requested already in scipy: scipy.interpolate.piecewise_polynomial and scipy.interpolate.KroghInterpolator. This only allows exact specification of point and derivative information, not least-squares fitting, and it doesn't necessarily support all the features that splrep does, but the heavy lifting is done in compiled code.
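For concreteness, the repeated-abscissa convention KroghInterpolator uses would look something like this (a small sketch with made-up numbers, not from the original thread):

from scipy.interpolate import KroghInterpolator

# each x is listed twice: the first matching y is the value,
# the second is the derivative at that point
xi = [0., 0., 1., 1.]
yi = [10., 1.1, 13., 0.8]   # p(0)=10, p'(0)=1.1, p(1)=13, p'(1)=0.8
p = KroghInterpolator(xi, yi)
print p([0., 0.5, 1.])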
Anne

2009/10/12 denis : > An alternate component of array interpolate(), straight from Wikipedia > (don't forget h !) -- > > def spline_2p2s( t, p0, p1, m0, m1, h=1 ): >     """ Hermite 2-point, 2-slope spline >         see http://en.wikipedia.org/wiki/Cubic_Hermite_spline >     """ >         # 0 -> p0,  1 -> p1,  1/2 -> (p0 + p1) / 2  -  (m1 - m0) / 8 >     try: >         t2 = t*t >         t3 = t2*t >         return ( >           p0 * (2*t3 - 3*t2 + 1) >         + p1 * (-2*t3 + 3*t2) >         + m0 * h * (t3 - 2*t2 + t) >         + m1 * h * (t3 - t2) ) >     except ValueError: >         # shape mismatch: objects cannot be broadcast to a single shape >         # i.e. points, t both vecs (is there a better way ?) >         return [spline_2p2s( t, p0, p1, m0, m1, h ) for t in t.copy()] > > cheers > -- denis > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >

From bsouthey at gmail.com Mon Oct 12 11:41:51 2009 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 12 Oct 2009 10:41:51 -0500 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: <4AD2F348.63BA.009B.0@twdb.state.tx.us> References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> <4AD2F348.63BA.009B.0@twdb.state.tx.us> Message-ID: <4AD34E3F.8010107@gmail.com>

On 10/12/2009 09:13 AM, Dharhas Pothina wrote: > Hi All, > > Before I start I wanted to let all of you know that I really appreciate the work that has gone into genfromtxt. It is a hugely useful function that has become indispensable in my work. A lot of the problems I have in general come from the fact that I am a fairly new python/numpy user and don't always understand some of the intricacies involved. > > Just a disclaimer. I am not familiar enough with the way genfromtxt works to have understood the entire discussion that followed my posting, so I'm going to answer the questions I can answer. > >>>> Bruce Southey 10/7/2009 2:20 PM>>> >>>> What did you actually expect? >>>> It would be very informative if you could provide a simple example of >>>> this for testing. >>>> > Coming from a Matlab background the first thing I would have expected when given an option to read in (or otherwise define column) variables is a structure which lets me know what the name of each column is. In matlab this would be a variable say 'a' such that a.header is a list of header names and a.data has the data in a 2D array such that column 'n' has the data associated with a.header[n]. > > Now since I've become fairly used to the way python does things, my modified expectation is if I read a file with the data below: > > 10.0 20.1 30.7 > 10.0 30.2 40.3 > 20.1 21.3 67.5 > ... > > with the command: a = np.genfromtxt(fname,usecols=(0,1,2),names='x,y,z') > > I should get a structured array > > such that a['x'] = np.array([10.0,10.0,20.1,...]) > > etc. >

See Pierre's comments because genfromtxt outputs either a plain array type or a structured array - which is what I overlooked. So genfromtxt provides a plain array type by default where there are no named columns and thus names do not have an effect. If you want named columns then you have to get genfromtxt to give a structured array - see Skipper's examples on when that happens.

> If you would like a sample data file I can provide one. >

Small self-contained examples are always very useful, especially when something is not as expected.
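Something like this two-liner already captures the report (a sketch mirroring your data; with no dtype given, the default dtype=float wins and the names are silently dropped):

from StringIO import StringIO
import numpy as np

s = StringIO("10.0 20.1 30.7\n10.0 30.2 40.3")
a = np.genfromtxt(s, names='x,y,z')   # a plain float array comes back, no fields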
>>>> There are many combinations of arguments so not all have been tested and >>>> it is not always clear what the expected behavior should be. > I think for me the confusion is in an initial lack of understanding on how dtypes work. If I type help np.genfromtxt in Ipython I get: > > names : {None, True, string, sequence}, optional > If `names` is True, the field names are read from the first valid line > after the first `skiprows` lines. > If `names` is a sequence or a single-string of comma-separated names, > the names will be used to define the field names in a flexible dtype. > If `names` is None, the names of the dtype fields will be used, if any. > > My understanding of this was that the names argument would be used to define the field names. What I didn't realize is that if the dtype is not explicitly set (or set equal to None) then since all the data in the files are floats the dtype for the entire array is float rather than each column having its own dtype. So there are no column specific dtypes whose field names can be set to the values I specified and the field names I set are ignored (at least that's what I think is happening). > > To me the reason for having the 'names' argument is so that there is a mechanism to show what the names of each column are. The fact that it fails silently when the dtype is not specified is what was problematic. So my suggestion was to do one of the following: > > 1) add something in the docstring to note that dtype needs to be specified for the names argument to work > 2) change the way genfromtxt works to default to dtype=None when the 'names' argument is invoked without a dtype being specified > 3) issue some sort of warning/error > >>>> From the numpy help, there is this example: >>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'), >>>> ('mystring','S5')], delimiter=",") >>>> >>>> It does not help that the dtype of structured arrays also includes the >>>> actual name. So I do not think we can use the dtype argument without using >>>> the combination of dtype and name. Perhaps if dtype is split into names >>>> and formats so that dtype=('name', 'format'). > I think when I was reading the help, I was immediately drawn to the 'names' argument as the part of the function that would do what I needed it to. It was only a while later that I read through things more completely and worked out the connection to 'dtype' and also the fact that I could specify the field names through the 'dtype' argument as well. To me the combination of dtype=None & names='x,y,z' is more useful because I can give each column a name but let numpy figure out the format automatically without having to specify each column manually. > > - dharhas >

I do agree that the documentation is really behind the functionality of genfromtxt and thus gets confusing. But both Skipper's and Pierre's comments have really cleared many of these points up. The documentation needs work but also we need people to test it and indicate when things are not as expected. If it is documentation then we can address that issue in the documentation, probably using a special help page on using genfromtxt with all the different cases that Skipper provided.
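And the flip side, the combination you describe, does work (again only a sketch, reusing the toy stream from above; dtype=None makes genfromtxt guess a format per column and attach the given names):

s.seek(0)
b = np.genfromtxt(s, dtype=None, names='x,y,z')
print b['x']   # -> array([ 10., 10.])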
Bruce

From pgmdevlist at gmail.com Mon Oct 12 12:56:31 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 12 Oct 2009 12:56:31 -0400 Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation In-Reply-To: <4AD34E3F.8010107@gmail.com> References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> <4AD2F348.63BA.009B.0@twdb.state.tx.us> <4AD34E3F.8010107@gmail.com> Message-ID: <615F65EF-B694-4BD2-8E1E-2F956CF1174E@gmail.com>

All, You'll be happy to learn that I am working on some additional docs for genfromtxt. I introduced some new features (`filling_values`, `skip_footer`), deprecated some others (`missing` should be `missing_values`, `skip_row` should be `skip_header`), modified how some of the arguments worked (`missing_values` doesn't have to be a dictionary any more), to such an extent that it may get overwhelming for a new user. Moreover, trying to fit every single bit of information in the docstring of the function is maybe a tad counterproductive. So, expect some kind of draft in a few days. I'm not sure where to put it yet (probably right in `numpy/doc`) nor how to point to it. I'll let our doc specialists decide what to do.

Cheers P.

From etrade.griffiths at dsl.pipex.com Mon Oct 12 13:00:54 2009 From: etrade.griffiths at dsl.pipex.com (Etrade Griffiths) Date: Mon, 12 Oct 2009 18:00:54 +0100 Subject: [SciPy-User] SCIPY FSOLVE In-Reply-To: References: Message-ID: <7lcd7l$8lsofg@smtp.pipex.tiscali.co.uk>

>It appears you are correct in your concern about the closeness of p1 and >p2. You can avoid the problem of raising a negative number to a >fractional power if you reformulate your residual. > >If you change this: > > curr_err = q1 - ( c[0] * ( p2 ** 2 - p1 ** 2 ) ) ** n[0] > >to this: > > curr_err = q1**(1/n[0]) - ( c[0] * ( p2 ** 2 - p1 ** 2 ) ) > >the script gives the answer: > >[ 200.00004794 2.39840655 1.77542113 0.62298543] > >Note that p2 is very close to p1, so it is not surprising that during >the solver's iterations, p2**2 - p1**2 occasionally becomes negative.

Warren

thanks for the suggestion - that could well be the answer! Will try it in my larger model

Best regards

Alun Griffiths

From stablum at gmail.com Mon Oct 12 13:59:46 2009 From: stablum at gmail.com (Francesco Stablum) Date: Mon, 12 Oct 2009 19:59:46 +0200 Subject: [SciPy-User] how to interpret numpy.fft.fftfreq output? Message-ID: <377fd1f30910121059u51313e6bo93223f4ab49ad8de@mail.gmail.com>

Hello,

I am doing the fft of a wave file (44100 Hz) with some kind of success. I have a problem: I have to display the frequency amounts and I have to calculate which frequency corresponds to the indexes of the result of the fft. I am actually using the fftfreq function in this way:

fft.fftfreq(5120,d=1.0/22050.0)

(5120 is the number of frames).

The result is that the second half of the array is negative. I have seen in the documentation that it is normal... why? I would expect to have an array like this: [0, 5, ..., 22045, 22050].

How should I interpret the output of fftfreq?

thanks for the attention, Francesco Stablum

-- The generation of random numbers is too important to be left to chance - Robert R. Coveyou

From silva at lma.cnrs-mrs.fr Mon Oct 12 15:38:55 2009 From: silva at lma.cnrs-mrs.fr (Fabricio Silva) Date: Mon, 12 Oct 2009 21:38:55 +0200 Subject: [SciPy-User] how to interpret numpy.fft.fftfreq output?
In-Reply-To: <377fd1f30910121059u51313e6bo93223f4ab49ad8de@mail.gmail.com> References: <377fd1f30910121059u51313e6bo93223f4ab49ad8de@mail.gmail.com> Message-ID: <1255376335.2086.12.camel@PCTerrusse>

On Monday 12 October 2009 at 19:59 +0200, Francesco Stablum wrote: > Hello, > > I am doing the fft of a wave file (44100 Hz) with some kind of success. > I have a problem: I have to display the frequency amounts and I have > to calculate which frequency corresponds to the indexes of the result > of the fft. I am actually using the fftfreq function in this way: > > fft.fftfreq(5120,d=1.0/22050.0) > > (5120 is the number of frames). > > The result is that the second half of the array is negative. I have > seen in the documentation that it is normal... why? > I would expect to have an array like this: [0, 5, ..., 22045, 22050]. > > How should I interpret the output of fftfreq?

As you may know, the Fourier transform of digital signals is aliased, i.e. the fft is Fe-periodic. Due to this fact, you may need to filter your signal so that its frequency bandwidth is limited within (-Fe/2, Fe/2): that is what the Shannon theorem claims. It also means that all the frequency information about a digital signal is contained within that interval, or if you prefer (0, Fe), as the part in (Fe/2, Fe) is the same as (-Fe/2, 0) because of the periodicity.

To get the frequencies vector associated to the output of np.fft.fft, two solutions:

>>> s_f = np.fft.fft(s)
>>> freq = np.linspace(0, Fe, len(s_f))   # frequencies in (0, Fe)

or

>>> freq = np.fft.fftfreq(len(s_f), 1./Fe)   # frequencies in (0 .. Fe/2, -Fe/2 .. 0)

I suppose you are only interested in the (0, Fe/2) interval because your signals are real (and the fft is then Hermitian symmetric). You may then use np.fft.rfft.

-- Fabrice Silva Laboratory of Mechanics and Acoustics (CNRS, UPR 7051)

From perfreem at gmail.com Mon Oct 12 17:25:39 2009 From: perfreem at gmail.com (per freem) Date: Mon, 12 Oct 2009 17:25:39 -0400 Subject: [SciPy-User] performance of scipy: potential inefficiency in logsumexp and sampling from multinomial Message-ID:

hi all,

i have a piece of code that relies heavily on sampling from multinomial distributions and using their results to compute log probabilities. my code makes heavy use of 'multinomial' from scipy, and of 'logsumexp'. my code is unusually slow, and profiling it with Python's "cProfile" module reveals that most of the time is spent in the following functions:

479.524 0.000 code.py:211(my_func)
122.682 0.000 /Library/Python/2.5/site-packages/scipy/maxentropy/maxentutils.py:27(logsumexp)
40.645 0.000 /Library/Python/2.5/site-packages/numpy/core/numeric.py:180(asarray)
20.374 0.000 {method 'max' of 'numpy.ndarray' objects}

(the first column represents cumulative time, the second is percall time.)

my code (listed as 'my_func' above) essentially computes a list of log probabilities, exponentiates them and renormalizes them (using 'logsumexp') and then samples from a multinomial distribution using those probabilities as a parameter. i then check to see which object came up true from the multinomial sample. here's a sketch of the code:

def my_func(my_list, n_items):
    final_list = []
    for n in xrange(n_items):
        prob = my_dict[(my_list[n], n)]
        final_list.append(prob)
    # renormalize in log space
    final_list = array(final_list) - logsumexp(final_list)
    # draw a single sample from the resulting multinomial
    sample = multinomial(1, exp(final_list))
    sample_index = list(sample).index(1)
    return sample_index

the list 'my_list' usually has around 3 to 5 elements in it, and 'my_dict' has about 500-1000 keys.
this function gets called about 1.5 million times in my code, and it
takes about 5 minutes, which seems very long relative to these
operations. (i'd like to scale this up to a case where the function is
called about 10-120 million times.)

are there known efficiency issues with logsumexp? it seems like it
should be a very cheap operation. also, 'multinomial' ought to be
relatively cheap, i believe. does anyone have any ideas on how this can
be optimized? any input will be greatly appreciated. i am also open to
using cython if that is likely to make a significant improvement in
this case.

also, what is likely to be the origin of the call to "asarray"? (i am
not explicitly calling that function, it must be indirectly via some
other function.)

thanks very much.

From peter.cimermancic at gmail.com  Mon Oct 12 22:47:06 2009
From: peter.cimermancic at gmail.com (Peter Cimermančič)
Date: Mon, 12 Oct 2009 19:47:06 -0700
Subject: [SciPy-User] Problem with combining Fsolve and Integrate.Odeint
Message-ID: <18d53ca60910121947m1e82d81cie8efad9878e85918@mail.gmail.com>

Hi all,

I'm trying to model system that is described with few ODEs. Function,
where ODEs are in, is given as def function(y,t). It takes two arguments
as you can see. y is an array of different species in the model, whereas
t is an array of time steps. Then, I'd like to calculate steady state
using fsolve, which takes function with one argument only. When trying to
solve steady state, this error is raised: "TypeError: There is a mismatch
between the input and output shape of diff_equations.". How could I solve
my problem?

Thank you in advance,
Peter

From sebastian.walter at gmail.com  Tue Oct 13 06:59:19 2009
From: sebastian.walter at gmail.com (Sebastian Walter)
Date: Tue, 13 Oct 2009 12:59:19 +0200
Subject: [SciPy-User] Problem with combining Fsolve and Integrate.Odeint
In-Reply-To: <18d53ca60910121947m1e82d81cie8efad9878e85918@mail.gmail.com>
References: <18d53ca60910121947m1e82d81cie8efad9878e85918@mail.gmail.com>
Message-ID:

kind of hard to help with that little information: you should _always_
include the source code + output, preferably a simplified example that
still shows the error.

On Tue, Oct 13, 2009 at 4:47 AM, Peter Cimermančič wrote:
> Hi all,
> I'm trying to model system that is described with few ODEs. Function, where
> ODEs are in, is given as def function(y,t). It takes two arguments as you
> can see. y is an array of different species in the model, whereas t is an
> array of time steps. Then, I'd like to calculate steady state using fsolve,
> which takes function with one argument only. When trying to solve steady
> state, this error is raised: "TypeError: There is a mismatch between the
> input and output shape of diff_equations.". How could I solve my problem?
> Thank you in advance,
> Peter
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
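P.S. one blind guess, without having seen your code: odeint calls your
function as f(y, t), but fsolve calls it as f(y) only, so you may just
need a small wrapper that pins t. Untested, and `function` and `y0` here
just stand for your function and your initial guess:

from scipy.optimize import fsolve

# t is unused in an autonomous system, so fix it at an arbitrary value
steady_state = fsolve(lambda y: function(y, 0.0), y0)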
From jr at sun.ac.za  Tue Oct 13 07:01:25 2009
From: jr at sun.ac.za (Johann Rohwer)
Date: Tue, 13 Oct 2009 13:01:25 +0200
Subject: [SciPy-User] Problem with combining Fsolve and Integrate.Odeint
Message-ID: <200910131301.25921.jr@sun.ac.za>

Sending again - my previous reply doesn't seem to have made it to the
list...

On Tuesday 13 October 2009, Peter Cimermančič wrote:
> I'm trying to model system that is described with few ODEs.
> Function, where ODEs are in, is given as def function(y,t). It
> takes two arguments as you can see. y is an array of different
> species in the model, whereas t is an array of time steps. Then,
> I'd like to calculate steady state using fsolve, which takes
> function with one argument only. When trying to solve steady state,
> this error is raised: "TypeError: There is a mismatch between the
> input and output shape of diff_equations.". How could I solve my
> problem?

Here is a self-contained example of 2 simple ODEs with mass action
kinetics, that calculates a time course with odeint and then uses the
final point of the time course as an initial estimate for a
steady-state calculation with fsolve. As you can see, the arguments of
the function de are handled OK between odeint and fsolve.

BTW if you are interested in this problem in a more general way, you
might want to look at our PySCeS software (http://pysces.sf.net) which
is for simulation of (bio)chemical kinetic networks. It has high-level
functions to calculate time courses and steady states automatically
(amongst others) and runs on top of scipy.

Regards
Johann

---------------------8-<--------------------------------

import scipy
scipy.pkgload('optimize', 'integrate')
import pylab

k1 = 10
k2 = 5
k3 = 8

def de(X,t):
    Xdot = scipy.zeros((2),'d')
    Xdot[0] = k1 - k2*X[0]
    Xdot[1] = k2*X[0] - k3*X[1]
    return Xdot

init_val = scipy.array([10.,0.1])
t_range = scipy.linspace(0,1,21)

t_course = scipy.integrate.odeint(de, init_val, t_range)
fin_t_course = scipy.copy(t_course[-1])

ss = scipy.optimize.fsolve(de, fin_t_course, args=None)
print ss

pylab.plot(t_range, t_course[:,0], t_range, t_course[:,1])
pylab.show()

From peter.cimermancic at gmail.com  Tue Oct 13 11:43:39 2009
From: peter.cimermancic at gmail.com (Peter Cimermančič)
Date: Tue, 13 Oct 2009 08:43:39 -0700
Subject: [SciPy-User] Problem with combining Fsolve and Integrate.Odeint
In-Reply-To: <200910131301.25921.jr@sun.ac.za>
References: <200910131301.25921.jr@sun.ac.za>
Message-ID: <18d53ca60910130843p21c4fdf2u36c99ceedfb01b72@mail.gmail.com>

Thank you. That helped me a lot. I'd like to calculate steady-state
concentrations at different concentrations of a particular species in my
model too. Whereas it worked perfectly up to some concentration, from
that concentration on it raised this warning: "Warning: The iteration is
not making good progress, as measured by the improvement from the last
ten iterations." Do you know how I could improve that?

And I will definitely check pysces. It looks exactly like what I need.

Cheers,
Peter

On Tue, Oct 13, 2009 at 4:01 AM, Johann Rohwer wrote:
> Sending again - my previous reply doesn't seem to have made it to the
> list...
>
> On Tuesday 13 October 2009, Peter Cimermančič wrote:
> > I'm trying to model system that is described with few ODEs.
> > Function, where ODEs are in, is given as def function(y,t). It
> > takes two arguments as you can see. y is an array of different
> > species in the model, whereas t is an array of time steps. Then,
> > I'd like to calculate steady state using fsolve, which takes
> > function with one argument only. When trying to solve steady state,
> > this error is raised: "TypeError: There is a mismatch between the
> > input and output shape of diff_equations.". How could I solve my
> > problem?
> > Here is a self-contained example of 2 simple ODEs with mass action > kinetics, that calculates a time course with odeint and then uses the > final point of the time course as an initial estimate for a steady- > state calculation with fsolve. As you can see, the arguments of the > function de are handled OK between odeint and fsolve. > > BTW if you are interested in this problem in a more general way, you > might want to look at our PySCeS software (http://pysces.sf.net) which > is for simulation of (bio)chemical kinetic networks. It has high-level > functions to calculate time courses and steady states automatically > (amongst others) and runs on top of scipy. > > Regards > Johann > > ---------------------8-<-------------------------------- > > import scipy > scipy.pkgload('optimize', 'integrate') > import pylab > > k1 = 10 > k2 = 5 > k3 = 8 > > def de(X,t): > Xdot = scipy.zeros((2),'d') > Xdot[0] = k1 - k2*X[0] > Xdot[1] = k2*X[0] - k3*X[1] > return Xdot > > init_val = scipy.array([10.,0.1]) > t_range = scipy.linspace(0,1,21) > > t_course = scipy.integrate.odeint(de, init_val, t_range) > fin_t_course = scipy.copy(t_course[-1]) > > ss = scipy.optimize.fsolve(de, fin_t_course, args=None) > print ss > > pylab.plot(t_range, t_course[:,0], t_range, t_course[:,1]) > pylab.show() > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Tue Oct 13 11:44:42 2009 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 13 Oct 2009 10:44:42 -0500 Subject: [SciPy-User] Problem with combining Fsolve and Integrate.Odeint In-Reply-To: <200910131301.25921.jr@sun.ac.za> References: <200910131301.25921.jr@sun.ac.za> Message-ID: <4AD4A06A.2010200@enthought.com> Hi Johann, You can do what you want by using the `args` argument of both fsolve and odeint. I have rewritten your example to do use `args`, and made several other changes that are more about my preferences than about getting it working. In particular, if the differential equations depend on the parameters k1, k2, and k3, these should really be arguments to the function, not global variables. Like fsolve, odeint also has an `args` argument that allows you to do this. Heres the code: ---------- from numpy import empty, array, linspace from scipy.optimize import fsolve from scipy.integrate import odeint import pylab def de(X, t, k1, k2, k3): Xdot = empty((2),'d') Xdot[0] = k1 - k2*X[0] Xdot[1] = k2*X[0] - k3*X[1] return Xdot if __name__ == "__main__": k1 = 10 k2 = 5 k3 = 8 init_val = array([10.,0.1]) t_range = linspace(0,1,21) t_course = odeint(de, init_val, t_range, args=(k1,k2,k3)) fin_t_course = t_course[-1] ss = fsolve(de, fin_t_course, args=(0, k1, k2, k3)) print ss pylab.plot(t_range, t_course[:,0], t_range, t_course[:,1]) pylab.show() ---------- Warren Johann Rohwer wrote: > Sending again - my previous reply doesn't seem to have made it to the > list... > > On Tuesday 13 October 2009, Peter Cimerman?i? wrote: > >> I'm trying to model system that is described with few ODEs. >> Function, where ODEs are in, is given as def function(y,t). It >> takes two arguments as you can see. y is an array of different >> species in the model, whereas t is an array of time steps. Then, >> I'd like to calculate steady state using fsolve, which takes >> function with one argument only. 
When trying to solve steady state, >> this error is raised: "TypeError: There is a mismatch between the >> input and output shape of diff_equations.". How could I solve my >> problem? >> > > Here is a self-contained example of 2 simple ODEs with mass action > kinetics, that calculates a time course with odeint and then uses the > final point of the time course as an initial estimate for a steady- > state calculation with fsolve. As you can see, the arguments of the > function de are handled OK between odeint and fsolve. > > BTW if you are interested in this problem in a more general way, you > might want to look at our PySCeS software (http://pysces.sf.net) which > is for simulation of (bio)chemical kinetic networks. It has high-level > functions to calculate time courses and steady states automatically > (amongst others) and runs on top of scipy. > > Regards > Johann > > ---------------------8-<-------------------------------- > > import scipy > scipy.pkgload('optimize', 'integrate') > import pylab > > k1 = 10 > k2 = 5 > k3 = 8 > > def de(X,t): > Xdot = scipy.zeros((2),'d') > Xdot[0] = k1 - k2*X[0] > Xdot[1] = k2*X[0] - k3*X[1] > return Xdot > > init_val = scipy.array([10.,0.1]) > t_range = scipy.linspace(0,1,21) > > t_course = scipy.integrate.odeint(de, init_val, t_range) > fin_t_course = scipy.copy(t_course[-1]) > > ss = scipy.optimize.fsolve(de, fin_t_course, args=None) > print ss > > pylab.plot(t_range, t_course[:,0], t_range, t_course[:,1]) > pylab.show() > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Tue Oct 13 11:48:42 2009 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 13 Oct 2009 10:48:42 -0500 Subject: [SciPy-User] Problem with combining Fsolve and Integrate.Odeint In-Reply-To: <4AD4A06A.2010200@enthought.com> References: <200910131301.25921.jr@sun.ac.za> <4AD4A06A.2010200@enthought.com> Message-ID: <4AD4A15A.2020506@enthought.com> Whoops. The code that I edited was from Johann, but I should have addressed my response to Peter, since he was the original poster. Warren Warren Weckesser wrote: > Hi Johann, > > You can do what you want by using the `args` argument of both fsolve > and odeint. > > I have rewritten your example to do use `args`, and made several other > changes that are more about my preferences than about getting it > working. In particular, if the differential equations depend on the > parameters k1, k2, and k3, these should really be arguments to the > function, not global variables. Like fsolve, odeint also has an > `args` argument that allows you to do this. 
> > Heres the code: > > ---------- > > from numpy import empty, array, linspace > from scipy.optimize import fsolve > from scipy.integrate import odeint > import pylab > > > def de(X, t, k1, k2, k3): > Xdot = empty((2),'d') > Xdot[0] = k1 - k2*X[0] > Xdot[1] = k2*X[0] - k3*X[1] > return Xdot > > > if __name__ == "__main__": > k1 = 10 > k2 = 5 > k3 = 8 > > init_val = array([10.,0.1]) > t_range = linspace(0,1,21) > > t_course = odeint(de, init_val, t_range, args=(k1,k2,k3)) > fin_t_course = t_course[-1] > > ss = fsolve(de, fin_t_course, args=(0, k1, k2, k3)) > print ss > > pylab.plot(t_range, t_course[:,0], t_range, t_course[:,1]) > pylab.show() > > ---------- > > Warren > > Johann Rohwer wrote: >> Sending again - my previous reply doesn't seem to have made it to the >> list... >> >> On Tuesday 13 October 2009, Peter Cimerman?i? wrote: >> >>> I'm trying to model system that is described with few ODEs. >>> Function, where ODEs are in, is given as def function(y,t). It >>> takes two arguments as you can see. y is an array of different >>> species in the model, whereas t is an array of time steps. Then, >>> I'd like to calculate steady state using fsolve, which takes >>> function with one argument only. When trying to solve steady state, >>> this error is raised: "TypeError: There is a mismatch between the >>> input and output shape of diff_equations.". How could I solve my >>> problem? >>> >> >> Here is a self-contained example of 2 simple ODEs with mass action >> kinetics, that calculates a time course with odeint and then uses the >> final point of the time course as an initial estimate for a steady- >> state calculation with fsolve. As you can see, the arguments of the >> function de are handled OK between odeint and fsolve. >> >> BTW if you are interested in this problem in a more general way, you >> might want to look at our PySCeS software (http://pysces.sf.net) which >> is for simulation of (bio)chemical kinetic networks. It has high-level >> functions to calculate time courses and steady states automatically >> (amongst others) and runs on top of scipy. >> >> Regards >> Johann >> >> ---------------------8-<-------------------------------- >> >> import scipy >> scipy.pkgload('optimize', 'integrate') >> import pylab >> >> k1 = 10 >> k2 = 5 >> k3 = 8 >> >> def de(X,t): >> Xdot = scipy.zeros((2),'d') >> Xdot[0] = k1 - k2*X[0] >> Xdot[1] = k2*X[0] - k3*X[1] >> return Xdot >> >> init_val = scipy.array([10.,0.1]) >> t_range = scipy.linspace(0,1,21) >> >> t_course = scipy.integrate.odeint(de, init_val, t_range) >> fin_t_course = scipy.copy(t_course[-1]) >> >> ss = scipy.optimize.fsolve(de, fin_t_course, args=None) >> print ss >> >> pylab.plot(t_range, t_course[:,0], t_range, t_course[:,1]) >> pylab.show() >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From tmbdev at gmail.com Tue Oct 13 15:50:24 2009 From: tmbdev at gmail.com (Thomas Breuel) Date: Tue, 13 Oct 2009 21:50:24 +0200 Subject: [SciPy-User] weave type converters for multidimensional C99 arrays? Message-ID: <7e51d15d0910131250h586647e1obff8a57ba96d11f1@mail.gmail.com> Weave currently has type converters that convert NumPy arrays to "PyObject *" and to Blitz arrays. 
Has anybody implemented type converters that convert NumPy arrays to C99 multidimensional, variable length arrays? Tom From perfreem at gmail.com Tue Oct 13 17:01:52 2009 From: perfreem at gmail.com (per freem) Date: Tue, 13 Oct 2009 17:01:52 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function Message-ID: hi all, i have a series of probability vector that i'd like to feed into multinomial to get an array of vector outcomes back. for example, given: p = array([[ 0.9 , 0.05, 0.05], [ 0.05, 0.05, 0.9 ]]) i'd like to call multinomial like this: multinomial(1, p) to get a vector of multinomial samplers, each using the nth list in 'p'. something like: array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems like 'multinomial' takes only a one dimensional array. i could write this as a "for" loop of course but i prefer a vectorized version since speed is crucial for me here. thanks very much. From dwf at cs.toronto.edu Tue Oct 13 19:59:41 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 13 Oct 2009 19:59:41 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: References: Message-ID: <6E9F4234-F3FD-4E78-BDC4-D0960FE52242@cs.toronto.edu> On 13-Oct-09, at 5:01 PM, per freem wrote: > hi all, > > i have a series of probability vector that i'd like to feed into > multinomial to get an array of vector outcomes back. for example, > given: > > p = array([[ 0.9 , 0.05, 0.05], > [ 0.05, 0.05, 0.9 ]]) > > i'd like to call multinomial like this: > > multinomial(1, p) > > to get a vector of multinomial samplers, each using the nth list in > 'p'. something like: > > array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems > like 'multinomial' takes only a one dimensional array. i could write > this as a "for" loop of course but i prefer a vectorized version since > speed is crucial for me here. > > thanks very much. Your best bet is probably to copy the pyrex/Cython code for multinomial in numpy/random/mtrand/mtrand.pyx, and add the functionality you want there. If you do it right (i.e. type your loop indices) then it should be fast. David From perfreem at gmail.com Wed Oct 14 00:27:31 2009 From: perfreem at gmail.com (per freem) Date: Wed, 14 Oct 2009 00:27:31 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: <6E9F4234-F3FD-4E78-BDC4-D0960FE52242@cs.toronto.edu> References: <6E9F4234-F3FD-4E78-BDC4-D0960FE52242@cs.toronto.edu> Message-ID: On Tue, Oct 13, 2009 at 7:59 PM, David Warde-Farley wrote: > On 13-Oct-09, at 5:01 PM, per freem wrote: > >> hi all, >> >> i have a series of probability vector that i'd like to feed into >> multinomial to get an array of vector outcomes back. for example, >> given: >> >> p = array([[ 0.9 , ?0.05, ?0.05], >> ? ? ? [ 0.05, ?0.05, ?0.9 ]]) >> >> i'd like to call multinomial like this: >> >> multinomial(1, p) >> >> to get a vector of multinomial samplers, each using the nth list in >> 'p'. something like: >> >> array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems >> like 'multinomial' takes only a one dimensional array. i could write >> this as a "for" loop of course but i prefer a vectorized version since >> speed is crucial for me here. >> >> thanks very much. > > Your best bet is probably to copy the pyrex/Cython code for > multinomial in numpy/random/mtrand/mtrand.pyx, and add the > functionality you want there. ?If you do it right (i.e. type your loop > indices) then it should be fast. 
> > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Hi David thanks for your reply. i am not sure how to do this though -- is the vectorized version i would write in pyrex/cython simply going to iterate through this vector of vectors and do the operation? will that really be efficient? is there some other library that can do vectorized multinomial like i described? i really am not sure how to write this cython. From josef.pktd at gmail.com Wed Oct 14 01:07:24 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 14 Oct 2009 01:07:24 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: References: Message-ID: <1cd32cbb0910132207o772571e7t94e2d9d619fe3593@mail.gmail.com> On Tue, Oct 13, 2009 at 5:01 PM, per freem wrote: > hi all, > > i have a series of probability vector that i'd like to feed into > multinomial to get an array of vector outcomes back. for example, > given: > > p = array([[ 0.9 , ?0.05, ?0.05], > ? ? ? [ 0.05, ?0.05, ?0.9 ]]) > > i'd like to call multinomial like this: > > multinomial(1, p) If you only want n=1, then it seems to me, you could do this also with drawing an integer out of range(3) with the given probabilities. This can be vectorized easily directly with numpy. (just like an individual observation in multinomial logit) Josef > > to get a vector of multinomial samplers, each using the nth list in > 'p'. something like: > > array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems > like 'multinomial' takes only a one dimensional array. i could write > this as a "for" loop of course but i prefer a vectorized version since > speed is crucial for me here. > > thanks very much. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed Oct 14 01:41:22 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 14 Oct 2009 01:41:22 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: <1cd32cbb0910132207o772571e7t94e2d9d619fe3593@mail.gmail.com> References: <1cd32cbb0910132207o772571e7t94e2d9d619fe3593@mail.gmail.com> Message-ID: <1cd32cbb0910132241r404bb37am69487b23eeef1ef2@mail.gmail.com> On Wed, Oct 14, 2009 at 1:07 AM, wrote: > On Tue, Oct 13, 2009 at 5:01 PM, per freem wrote: >> hi all, >> >> i have a series of probability vector that i'd like to feed into >> multinomial to get an array of vector outcomes back. for example, >> given: >> >> p = array([[ 0.9 , ?0.05, ?0.05], >> ? ? ? [ 0.05, ?0.05, ?0.9 ]]) >> >> i'd like to call multinomial like this: >> >> multinomial(1, p) > > If you only want n=1, then it seems to me, you could do this also with > drawing an integer out of range(3) with the given probabilities. This > can be vectorized easily directly with numpy. (just like an individual > observation in multinomial logit) > > Josef > >> >> to get a vector of multinomial samplers, each using the nth list in >> 'p'. something like: >> >> array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems >> like 'multinomial' takes only a one dimensional array. i could write >> this as a "for" loop of course but i prefer a vectorized version since >> speed is crucial for me here. 
for n=1 the following should work (unless I still cannot tell multinomial distribution and multinomial logit apart) Josef >>> p = np.array([[ 0.9 , 0.05, 0.05], [ 0.05, 0.05, 0.9 ]]) >>> pcum = p.cumsum(1) >>> pcuma = np.repeat(pcum,20, 0) >>> rvs = np.random.uniform(size=(40)) >>> a,b,c = pcuma.T >>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>> mnrvs[:20].sum() 20 >>> mnrvs[:20].sum(0) array([19, 0, 1]) >>> mnrvs[20:].sum() 20 >>> mnrvs[20:].sum(0) array([ 1, 0, 19]) >>> mnrvs[15:25] array([[0, 0, 1], [1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0], [0, 0, 1], [0, 0, 1], [0, 0, 1], [1, 0, 0], [0, 0, 1]]) check for larger sample: >>> pcuma = np.repeat(pcum,500, 0) >>> rvs = np.random.uniform(size=(1000)) >>> a,b,c = pcuma.T >>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>> mnrvs[:500].sum(0) array([456, 17, 27]) >>> mnrvs[500:].sum(0) array([ 24, 35, 441]) >>> p*500 array([[ 450., 25., 25.], [ 25., 25., 450.]]) From josef.pktd at gmail.com Wed Oct 14 02:19:29 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 14 Oct 2009 02:19:29 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: <1cd32cbb0910132241r404bb37am69487b23eeef1ef2@mail.gmail.com> References: <1cd32cbb0910132207o772571e7t94e2d9d619fe3593@mail.gmail.com> <1cd32cbb0910132241r404bb37am69487b23eeef1ef2@mail.gmail.com> Message-ID: <1cd32cbb0910132319j428be4daldc4e7b6accaad0c0@mail.gmail.com> On Wed, Oct 14, 2009 at 1:41 AM, wrote: > On Wed, Oct 14, 2009 at 1:07 AM, ? wrote: >> On Tue, Oct 13, 2009 at 5:01 PM, per freem wrote: >>> hi all, >>> >>> i have a series of probability vector that i'd like to feed into >>> multinomial to get an array of vector outcomes back. for example, >>> given: >>> >>> p = array([[ 0.9 , ?0.05, ?0.05], >>> ? ? ? [ 0.05, ?0.05, ?0.9 ]]) >>> >>> i'd like to call multinomial like this: >>> >>> multinomial(1, p) >> >> If you only want n=1, then it seems to me, you could do this also with >> drawing an integer out of range(3) with the given probabilities. This >> can be vectorized easily directly with numpy. (just like an individual >> observation in multinomial logit) >> >> Josef >> >>> >>> to get a vector of multinomial samplers, each using the nth list in >>> 'p'. something like: >>> >>> array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems >>> like 'multinomial' takes only a one dimensional array. i could write >>> this as a "for" loop of course but i prefer a vectorized version since >>> speed is crucial for me here. > > > for n=1 the following should work (unless I still cannot tell > multinomial distribution and multinomial logit apart) > > Josef > >>>> p = np.array([[ 0.9 , ?0.05, ?0.05], > ? ? ? [ 0.05, ?0.05, ?0.9 ]]) >>>> pcum = p.cumsum(1) >>>> pcuma = np.repeat(pcum,20, 0) >>>> rvs = np.random.uniform(size=(40)) >>>> a,b,c = pcuma.T >>>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>>> mnrvs[:20].sum() > 20 >>>> mnrvs[:20].sum(0) > array([19, ?0, ?1]) >>>> mnrvs[20:].sum() > 20 >>>> mnrvs[20:].sum(0) > array([ 1, ?0, 19]) >>>> mnrvs[15:25] > array([[0, 0, 1], > ? ? ? [1, 0, 0], > ? ? ? [1, 0, 0], > ? ? ? [1, 0, 0], > ? ? ? [1, 0, 0], > ? ? ? [0, 0, 1], > ? ? ? [0, 0, 1], > ? ? ? [0, 0, 1], > ? ? ? [1, 0, 0], > ? ? ? 
[0, 0, 1]]) > > > check for larger sample: > >>>> pcuma = np.repeat(pcum,500, 0) >>>> rvs = np.random.uniform(size=(1000)) >>>> a,b,c = pcuma.T >>>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>>> mnrvs[:500].sum(0) > array([456, ?17, ?27]) >>>> mnrvs[500:].sum(0) > array([ 24, ?35, 441]) >>>> p*500 > array([[ 450., ? 25., ? 25.], > ? ? ? [ ?25., ? 25., ?450.]]) > similar works for e.g. n=5 Josef >>> mnn = mnrvs.reshape(-1,5,3).sum(1) >>> mnn[95:105] array([[4, 0, 1], [4, 0, 1], [5, 0, 0], [4, 0, 1], [4, 1, 0], [0, 0, 5], [1, 0, 4], [1, 0, 4], [0, 0, 5], [0, 0, 5]]) >>> mnn[:100].sum(0) array([456, 17, 27]) >>> mnn[100:].sum(0) array([ 24, 35, 441]) >>> np.bincount(mnn[:100,0]) array([ 0, 0, 1, 8, 25, 66]) >>> np.bincount(mnn[:100,1]) array([85, 13, 2]) >>> np.bincount(mnn[:100,2]) array([78, 18, 3, 1]) compare with np.random >>> rvsmn5 = np.random.multinomial(5,p[0],size=10000) >>> rvsmn5.sum(0)/100. array([ 450.92, 24.23, 24.85]) >>> np.bincount(rvsmn5[:,0]) array([ 0, 3, 75, 740, 3191, 5991]) >>> np.bincount(rvsmn5[:,1])/100. array([ 77.98, 19.89, 2.05, 0.08]) >>> np.bincount(rvsmn5[:,2])/100. array([ 77.5 , 20.28, 2.09, 0.13]) From markbak at gmail.com Wed Oct 14 04:22:21 2009 From: markbak at gmail.com (Mark Bakker) Date: Wed, 14 Oct 2009 10:22:21 +0200 Subject: [SciPy-User] specify lognormal distribution with mu and sigma using scipy.stats Message-ID: <6946b9500910140122l4473d801s431d304cf20bca41@mail.gmail.com> Hello list, I am having trouble creating a lognormal distribution with known mean mu and standard deviation sigma using scipy.stats According to the docs, the programmed function is: lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) So s is the standard deviation. But how do I specify the mean? I found some information that when you specify loc and scale, you replace x by (x-loc)/scale But in the lognormal distribution, you want to replace log(x) by log(x)-loc where loc is mu. How do I do that? In addition, would it be a good idea to create some convenience functions that allow you to simply create lognormal (and maybe normal) distributions by specifying the more common mu and sigma? That would surely make things more userfriendly. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From markbak at gmail.com Wed Oct 14 06:12:22 2009 From: markbak at gmail.com (Mark Bakker) Date: Wed, 14 Oct 2009 12:12:22 +0200 Subject: [SciPy-User] problem with computing moments of normal distribution Message-ID: <6946b9500910140312p30357769rd86bcc4af8841465@mail.gmail.com> Hello List, I seem to have trouble computing the moments of a normal distribution when the 'loc' keyword (I know, that's the mean in this case) is specified.Any ideas? 
Here's my output:

In [207]: f = scipy.stats.norm()
In [208]: f.moment(1) # works
Out[208]: 0.0
In [209]: f = scipy.stats.norm(loc=1)
In [210]: f.moment(1) # doesn't work
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/Users/mark/models/whpa/brad/timeseriesmodel.py in ()
----> 1
      2
      3
      4
      5
/Library/Frameworks/Python.framework/Versions/5.0.0/lib/python2.5/site-packages/scipy/stats/distributions.py
in moment(self, n)
    131         return self.dist.stats(*self.args,**kwds)
    132     def moment(self,n):
--> 133         return self.dist.moment(n,*self.args,**self.kwds)
    134     def entropy(self):
    135         return self.dist.entropy(*self.args,**self.kwds)
TypeError: moment() got an unexpected keyword argument 'loc'

From yosefmel at post.tau.ac.il  Wed Oct 14 08:02:52 2009
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Wed, 14 Oct 2009 14:02:52 +0200
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T])
In-Reply-To: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
Message-ID: <200910141402.52413.yosefmel@post.tau.ac.il>

On Monday 12 October 2009 16:42:21 denis wrote:
> Some vectors are iterators: I want
>     expr(T) if isscalar(T) \
>         else array([expr(t) for t in T])
> Is there an idiom for this ?
>
> One can
>     try:
>         expr(T)
>     except ValueError:
>         # ValueError: shape mismatch: objects cannot be broadcast to a single shape
>         array([ expr(t) for t in T ])
>
> but this is ugly, and probably wrong in some cases even with no
> ValueError
> I'm afraid the broadcasting rules are just too complex for me /
> I'm too simple for the rules.

v_expr = numpy.vectorize(expr)
v_expr(T)

Is that what you wanted?
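For example, with a toy expr (an untested sketch, just to show the
calling convention):

import numpy as np

def expr(t):
    return t**2 + 1.0

v_expr = np.vectorize(expr)
print v_expr(3.0)            # scalar input
print v_expr(np.arange(4.))  # iterable input, returns an array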
From josef.pktd at gmail.com  Wed Oct 14 08:43:16 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 14 Oct 2009 08:43:16 -0400
Subject: [SciPy-User] problem with computing moments of normal distribution
In-Reply-To: <6946b9500910140312p30357769rd86bcc4af8841465@mail.gmail.com>
References: <6946b9500910140312p30357769rd86bcc4af8841465@mail.gmail.com>
Message-ID: <1cd32cbb0910140543i295c9a8cqaf9e5c1e1399f7a5@mail.gmail.com>

On Wed, Oct 14, 2009 at 6:12 AM, Mark Bakker wrote:
> Hello List, I seem to have trouble computing the moments of a normal
> distribution when the 'loc' keyword (I know, that's the mean in this case)
> is specified.
> Any ideas? Here's my output:
>
> In [207]: f = scipy.stats.norm()
> In [208]: f.moment(1) # works
> Out[208]: 0.0
> In [209]: f = scipy.stats.norm(loc=1)
> In [210]: f.moment(1) # doesn't work
> ---------------------------------------------------------------------------
> TypeError                                 Traceback (most recent call last)
> /Users/mark/models/whpa/brad/timeseriesmodel.py in ()
> ----> 1
>       2
>       3
>       4
>       5
> /Library/Frameworks/Python.framework/Versions/5.0.0/lib/python2.5/site-packages/scipy/stats/distributions.py
> in moment(self, n)
>     131         return self.dist.stats(*self.args,**kwds)
>     132     def moment(self,n):
> --> 133         return self.dist.moment(n,*self.args,**self.kwds)
>     134     def entropy(self):
>     135         return self.dist.entropy(*self.args,**self.kwds)
> TypeError: moment() got an unexpected keyword argument 'loc'
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

The current moment method does not support loc and scale, only the
standard distribution.

def moment(self, n, *args):
    """
    n'th order non-central moment of distribution

    Parameters
    ----------
    n: int, n>=1
        order of moment
    arg1, arg2, arg3,... : array-like
        The shape parameter(s) for the distribution
        (see docstring of the instance object for more information)
    """

You can get mean, variance, skew and kurtosis through norm.stats

>>> stats.norm.stats(loc=5, scale=2, moments='mvsk')
(array(5.0), array(4.0), array(0.0), array(0.0))

It is possible to recover the central and non-central first four moments
from mvsk (I have some helper function for this somewhere). If you are
just interested in the first two moments, then the translation is very
short.

However, moments higher than 2, including skew and kurtosis, are not
fully bugfixed. I know some distributions have wrong higher moments
(ncf?), but I don't yet have a generic test function to verify the values
for all distributions, and it is slow to find references to check the
formulas against. Some other problems with moments (and stats) are for
cases where the variance is infinite.

You could file an enhancement ticket for `moment`. Without looking up
some references, I don't know how loc and scale will affect the
non-central moments higher than the second. If you can work this out,
the enhancement of `moment` to handle loc and scale will get into scipy
sooner.

I'm sorry for not having a more positive answer, there are still a few
gaps in stats.distributions.

Josef

From mhorstma at uni-bonn.de  Wed Oct 14 07:53:07 2009
From: mhorstma at uni-bonn.de (Marie-Therese Horstmann)
Date: Wed, 14 Oct 2009 13:53:07 +0200
Subject: [SciPy-User] Arrows in polar plot at zero degree
Message-ID:

Hello everybody,

I currently experience some problem with arrows in polar plots.
Everything is fine, as long as the arrow does not cross the zero line.
Here is an example from the matplotlib gallery
(http://matplotlib.sourceforge.net/examples/pylab_examples/polar_demo.html),
but with an arrow pointing at 45° outward. To create the arrow I just
added the following line:

arr = plt.arrow(45, 0.5, 0,1 , alpha = 0.5, width = 0.1, edgecolor =
'black', facecolor = 'green', lw = 2)

You can find the complete source code at the end of the mail. If I want
to point the arrow in the zero direction,

arr = plt.arrow(0, 0.5, 0,1 , alpha = 0.5, width = 0.1, edgecolor =
'black', facecolor = 'green', lw = 2)

there is only the silhouette of an arrow visible, but nearly everything
seems to be green (as the arrow should be). To me it seems as if there is
some problem with the periodicity in polar plots.

Does anyone have an idea or a workaround?
Thank you very much in advance
Marie-Therese

---------------------------------------------------------------------------
Source code to reproduce the zero direction "arrow"

import matplotlib
import matplotlib.pyplot as plt   # needed for plt.arrow below
import numpy as np
from matplotlib.pyplot import figure, show, rc, grid

# radar green, solid grid lines
rc('grid', color='#316931', linewidth=1, linestyle='-')
rc('xtick', labelsize=15)
rc('ytick', labelsize=15)

# force square figure and square axes looks better for polar, IMO
width, height = matplotlib.rcParams['figure.figsize']
size = min(width, height)
# make a square figure
fig = figure(figsize=(size, size))
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8], polar=True, axisbg='#d5de9c')

r = np.arange(0, 3.0, 0.01)
theta = 2*np.pi*r
ax.plot(theta, r, color='#ee8d18', lw=3)
ax.set_rmax(2.0)
grid(True)

ax.set_title("And there was much rejoicing!", fontsize=20)
#This is the line I added:
arr = plt.arrow(0, 0.5, 0,1 , alpha = 0.5, width = 0.1, edgecolor =
'black', facecolor = 'green', lw = 2)
show()

From josef.pktd at gmail.com  Wed Oct 14 09:20:33 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 14 Oct 2009 09:20:33 -0400
Subject: [SciPy-User] specify lognormal distribution with mu and sigma using scipy.stats
In-Reply-To: <6946b9500910140122l4473d801s431d304cf20bca41@mail.gmail.com>
References: <6946b9500910140122l4473d801s431d304cf20bca41@mail.gmail.com>
Message-ID: <1cd32cbb0910140620j1c3a8a70p5226359576406128@mail.gmail.com>

On Wed, Oct 14, 2009 at 4:22 AM, Mark Bakker wrote:
> Hello list,
> I am having trouble creating a lognormal distribution with known mean mu and
> standard deviation sigma using scipy.stats
> According to the docs, the programmed function is:
> lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2)
> So s is the standard deviation. But how do I specify the mean? I found some
> information that when you specify loc and scale, you replace x by
> (x-loc)/scale
> But in the lognormal distribution, you want to replace log(x) by log(x)-loc
> where loc is mu. How do I do that? In addition, would it be a good idea to
> create some convenience functions that allow you to simply create lognormal
> (and maybe normal) distributions by specifying the more common mu and sigma?
> That would surely make things more userfriendly.
> Thanks,
> Mark

I don't think loc of lognorm makes much sense in most applications,
since it is just shifting the support, the lower boundary is zero+loc.
The loc of the underlying normal distribution enters through the scale.

see also http://en.wikipedia.org/wiki/Log-normal_distribution#Mean_and_standard_deviation

>>> print stats.lognorm.extradoc

Lognormal distribution

lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2)
for x > 0, s > 0.

If log x is normally distributed with mean mu and variance sigma**2,
then x is log-normally distributed with shape parameter sigma and scale
parameter exp(mu).

roundtrip with mean mu of the underlying normal distribution (scale=1):

>>> mu=np.arange(5)
>>> np.log(stats.lognorm.stats(1, loc=0,scale=np.exp(mu))[0])-0.5
array([ 0.,  1.,  2.,  3.,  4.])

corresponding means of lognormal distribution

>>> stats.lognorm.stats(1, loc=0,scale=np.exp(mu))[0]
array([  1.64872127,   4.48168907,  12.18249396,  33.11545196,  90.0171313 ])

shifting support:

>>> stats.lognorm.a
0.0
>>> stats.lognorm.ppf([0, 0.5, 1], 1, loc=3,scale=1)
array([  3.,   4.,  Inf])

The only case that I know for lognormal is in regression, so I'm not
sure what you mean by the convenience functions.
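If by convenience function you mean just a thin wrapper around the
scale=exp(mu) translation, a minimal sketch could look like this
(`lognorm_mu_sigma` is a made-up name, nothing like it exists in
scipy.stats):

import numpy as np
from scipy import stats

def lognorm_mu_sigma(mu, sigma):
    # frozen lognormal, parameterized by the mean mu and standard
    # deviation sigma of the underlying normal distribution
    return stats.lognorm(sigma, loc=0, scale=np.exp(mu))

# usage: rv = lognorm_mu_sigma(1.0, 0.5)
# rv.stats() gives the mean and variance of the lognormal itself,
# not mu and sigma

but whether something like that belongs in scipy is another question.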
(the normal distribution is defined by loc=mean, scale=standard deviation) assume the regression equation is y = x*beta*exp(u) u distributed normal(0, sigma^2) this implies ln y = ln(x*beta) + u which is just a standard linear regression equation which can be estimated by ols or mle exp(u) in this case is lognormal distributed Josef From josef.pktd at gmail.com Wed Oct 14 09:42:20 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 14 Oct 2009 09:42:20 -0400 Subject: [SciPy-User] specify lognormal distribution with mu and sigma using scipy.stats In-Reply-To: <1cd32cbb0910140620j1c3a8a70p5226359576406128@mail.gmail.com> References: <6946b9500910140122l4473d801s431d304cf20bca41@mail.gmail.com> <1cd32cbb0910140620j1c3a8a70p5226359576406128@mail.gmail.com> Message-ID: <1cd32cbb0910140642w4dabc8c7x64451f3ea4dc5c89@mail.gmail.com> On Wed, Oct 14, 2009 at 9:20 AM, wrote: > On Wed, Oct 14, 2009 at 4:22 AM, Mark Bakker wrote: >> Hello list, >> I am having trouble creating a lognormal distribution with known mean mu and >> standard deviation sigma using scipy.stats >> According to the docs, the programmed function is: >> lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) >> So s is the standard deviation. But how do I specify the mean? I found some >> information that when you specify loc and scale, you replace x by >> (x-loc)/scale >> But in the lognormal distribution, you want to replace log(x) by log(x)-loc >> where loc is mu. How do I do that? In addition, would it be a good idea to >> create some convenience functions that allow you to simply create lognormal >> (and maybe normal) distributions by specifying the more common mu and sigma? >> That would surely make things more userfriendly. >> Thanks, >> Mark > > I don't think loc of lognorm makes much sense in most application, > since it is just shifting the support, lower boundary is zero+loc. The > loc of the underlying normal distribution enters through the scale. > > see also http://en.wikipedia.org/wiki/Log-normal_distribution#Mean_and_standard_deviation > > >>>> print stats.lognorm.extradoc > > > Lognormal distribution > > lognorm.pdf(x,s) = 1/(s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) > for x > 0, s > 0. > > If log x is normally distributed with mean mu and variance sigma**2, > then x is log-normally distributed with shape paramter sigma and scale > parameter exp(mu). > > > roundtrip with mean mu of the underlying normal distribution (scale=1): > >>>> mu=np.arange(5) >>>> np.log(stats.lognorm.stats(1, loc=0,scale=np.exp(mu))[0])-0.5 > array([ 0., ?1., ?2., ?3., ?4.]) > > corresponding means of lognormal distribution > >>>> stats.lognorm.stats(1, loc=0,scale=np.exp(mu))[0] > array([ ?1.64872127, ? 4.48168907, ?12.18249396, ?33.11545196, ?90.0171313 ]) > > > shifting support: > >>>> stats.lognorm.a > 0.0 >>>> stats.lognorm.ppf([0, 0.5, 1], 1, loc=3,scale=1) > array([ ?3., ? 4., ?Inf]) > > > The only case that I know for lognormal is in regression, so I'm not > sure what you mean by the convenience functions. > (the normal distribution is defined by loc=mean, scale=standard deviation) > > assume the regression equation is > y = x*beta*exp(u) ? ?u distributed normal(0, sigma^2) > this implies > ln y = ln(x*beta) + u ? which is just a standard linear regression > equation which can be estimated by ols or mle I think, I don't remember this part correctly, I just realized that the regression equation would be non-linear in parameters. 
Josef

> exp(u) in this case is lognormal distributed
>
> Josef
>

From jsseabold at gmail.com  Wed Oct 14 11:02:14 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Wed, 14 Oct 2009 11:02:14 -0400
Subject: [SciPy-User] Suggestion for numpy.genfromtxt documentation
In-Reply-To: <615F65EF-B694-4BD2-8E1E-2F956CF1174E@gmail.com>
References: <4ACC6CA1.63BA.009B.0@twdb.state.tx.us> <4ACCE9F2.4090002@gmail.com> <4AD2F348.63BA.009B.0@twdb.state.tx.us> <4AD34E3F.8010107@gmail.com> <615F65EF-B694-4BD2-8E1E-2F956CF1174E@gmail.com>
Message-ID:

On Mon, Oct 12, 2009 at 12:56 PM, Pierre GM wrote:
> All,
> You'll be happy to learn that I am working on some additional docs for
> genfromtxt. I introduced some new features (`filling_values`,
> `skip_footer`), deprecated some others (`missing` should be
> `missing_values`, `skip_row` should be `skip_header`), modified how
> some of the arguments worked (`missing_values` doesn't have to be a
> dictionary any more), to such an extent that it may get overwhelming
> for a new user. Moreover, trying to fit every single bit of
> information in the docstring of the function is maybe a tad
> counterproductive.
> So, expect some kind of draft in a few days. I'm not sure where to put
> it yet (probably right in `numpy/doc`) nor how to point to it. I'll
> let our doc specialists decide what to do.
> Cheers
> P.

Good news! I'm looking forward to seeing what you've done and possibly
contributing to lengthier documentation.

One thing that I was thinking, and I think Dharhas' comments underline
this, is that for users (ones looking for some kind of statistical
package at least), the first thing you want to do is get your data in
and start playing. That's how I operate, at least. "Shoot first, RTM
later." So I'm glad the io functions are getting some attention, and
that it's looking like it's now not totally necessary to understand the
subtleties of constructing a dtype to have a first go.

Cheers,

Skipper

From robert.kern at gmail.com  Wed Oct 14 11:18:25 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 14 Oct 2009 10:18:25 -0500
Subject: [SciPy-User] Arrows in polar plot at zero degree
In-Reply-To: References:
Message-ID: <3d375d730910140818w4f3a361dgdce90f99882f76f6@mail.gmail.com>

On Wed, Oct 14, 2009 at 06:53, Marie-Therese Horstmann wrote:
> Hello everybody,
>
> I currently experience some problem with arrows in polar
> plots.

The matplotlib list is over here:

https://lists.sourceforge.net/lists/listinfo/matplotlib-users

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From denis-bz-gg at t-online.de  Wed Oct 14 13:23:47 2009
From: denis-bz-gg at t-online.de (denis)
Date: Wed, 14 Oct 2009 10:23:47 -0700 (PDT)
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T])
In-Reply-To: <200910141402.52413.yosefmel@post.tau.ac.il>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il>
Message-ID:

On Oct 14, 2:02 pm, Yosef Meller wrote:

> v_expr = numpy.vectorize(expr)
> v_expr(T)
>
> Is that what you wanted?

Yosef,
 thanks, the right direction -- there must be a numpy primitive for
this.
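(the closest I've found so far is np.atleast_1d, untested sketch:

import numpy as np

def funcvec(f, T):
    # same contract as the funciter in the test case below:
    # scalar in -> scalar out, iterable in -> array of results
    scalar = np.isscalar(T)
    out = np.array([f(t) for t in np.atleast_1d(T)])
    return out[0] if scalar else out

though that's still a python loop, not a primitive.)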
But 2 problems with vectorize:
1) an optional arg => TypeError: __call__() got an unexpected keyword
argument 'h'
2) vectorize => broadcasting => ValueError
Here's a test case: ugly, but funciter() is at least correct :)


""" funciter, vectorize -- 14oct """
import numpy as np

def spline_2p2s( t, p0, p1, m0, m1, h=1 ):
    """ Hermite 2-point, 2-slope spline
        t: a scalar / range / iterator
        p0 p1 m0 m1: scalars or arrays
        Beware: t and p0 both vecs => broadcasting =>
            ValueError: shape mismatch: objects cannot be broadcast to a single shape
        (need guidelines, axioms on broadcasting)
    """
    def f(t):
        t2 = t*t
        t3 = t2*t
        return (
              p0 * (2*t3 - 3*t2 + 1)
            + p1 * (-2*t3 + 3*t2)
            + m0 * h * (t3 - 2*t2 + t)
            + m1 * h * (t3 - t2) )
    return funciter( f, t )

def funciter( f, T ):
    return f(T) if np.isscalar(T) \
        else np.array([ f(t) for t in T ])

#...............................................................................
if __name__ == "__main__":
    t = np.arange( 0, 1.01, .1 )
    p0 = np.array(( 0, 0 ))
    p1 = np.array(( 1, 0 ))
    m0 = np.array(( 1, 1 ))
    m1 = np.array(( 1, -1 ))

    s = spline_2p2s( t, p0, p1, m0, m1 )
    print "spline_2p2s", s.T

    spline_2p2s_vec = np.vectorize( spline_2p2s )
    s = spline_2p2s_vec( t, p0, p1, m0, m1 )
    print "spline_2p2s_vec", s.T

From gustaf at laserpanda.com  Wed Oct 14 13:47:24 2009
From: gustaf at laserpanda.com (Gustaf Nilsson)
Date: Wed, 14 Oct 2009 18:47:24 +0100
Subject: [SciPy-User] a rhetorical question on multithreading..
Message-ID:

Hi

I guess if it was a good idea, then it would've been done already but...
As the numpy/scipy underlying parts are written in C (?), wouldn't it
make more sense to make them multithreaded by themselves, instead of
having us hack around it (and GIL) in python?
Or have i misunderstood?

Gusty
--

From robert.kern at gmail.com  Wed Oct 14 15:40:54 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 14 Oct 2009 14:40:54 -0500
Subject: [SciPy-User] a rhetorical question on multithreading..
In-Reply-To: References:
Message-ID: <3d375d730910141240q3cd5bd05pdd2c7b9ef911721b@mail.gmail.com>

On Wed, Oct 14, 2009 at 12:47, Gustaf Nilsson wrote:
> Hi
> I guess if it was a good idea, then it would've been done already but...
> As the numpy/scipy underlying parts are written in C (?), wouldn't it make
> more sense to make them multithreaded by themselves, instead of having us
> hack around it (and GIL) in python?
> Or have i misunderstood?

Making something multithreaded isn't really a specific thing. There are
many strategies one can apply and some of them involve multithreading.
Unfortunately, many of those strategies also need tuning for the
particular platform, which is something that is difficult to do for a
general library like numpy. Additionally, some of the most promising
parallelism strategies for your code operate at a higher level than the
C-coded operations in numpy; just making the ufuncs and C-coded routines
multithreaded gives you a smaller win, making the effort less
worthwhile. Furthermore, at the C level, threading introduces
platform-specific code that makes things more difficult.

Which isn't to say that we haven't tried. Take a look at the multicore/
branch:

http://svn.scipy.org/svn/numpy/branches/multicore/

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco From perfreem at gmail.com Wed Oct 14 23:05:34 2009 From: perfreem at gmail.com (per freem) Date: Wed, 14 Oct 2009 23:05:34 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: <1cd32cbb0910132319j428be4daldc4e7b6accaad0c0@mail.gmail.com> References: <1cd32cbb0910132207o772571e7t94e2d9d619fe3593@mail.gmail.com> <1cd32cbb0910132241r404bb37am69487b23eeef1ef2@mail.gmail.com> <1cd32cbb0910132319j428be4daldc4e7b6accaad0c0@mail.gmail.com> Message-ID: wrote: >>> If you only want n=1, then it seems to me, you could do this also with >>> drawing an integer out of range(3) with the given probabilities. This >>> can be vectorized easily directly with numpy. (just like an individual >>> observation in multinomial logit) >>> >>> Josef >>> >>>> >>>> to get a vector of multinomial samplers, each using the nth list in >>>> 'p'. something like: >>>> >>>> array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems >>>> like 'multinomial' takes only a one dimensional array. i could write >>>> this as a "for" loop of course but i prefer a vectorized version since >>>> speed is crucial for me here. >> >> >> for n=1 the following should work (unless I still cannot tell >> multinomial distribution and multinomial logit apart) >> >> Josef >> >>>>> p = np.array([[ 0.9 , ?0.05, ?0.05], >> ? ? ? [ 0.05, ?0.05, ?0.9 ]]) >>>>> pcum = p.cumsum(1) >>>>> pcuma = np.repeat(pcum,20, 0) >>>>> rvs = np.random.uniform(size=(40)) >>>>> a,b,c = pcuma.T >>>>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>>>> mnrvs[:20].sum() >> 20 >>>>> mnrvs[:20].sum(0) >> array([19, ?0, ?1]) >>>>> mnrvs[20:].sum() >> 20 >>>>> mnrvs[20:].sum(0) >> array([ 1, ?0, 19]) >>>>> mnrvs[15:25] >> array([[0, 0, 1], >> ? ? ? [1, 0, 0], >> ? ? ? [1, 0, 0], >> ? ? ? [1, 0, 0], >> ? ? ? [1, 0, 0], >> ? ? ? [0, 0, 1], >> ? ? ? [0, 0, 1], >> ? ? ? [0, 0, 1], >> ? ? ? [1, 0, 0], >> ? ? ? [0, 0, 1]]) >> >> >> check for larger sample: >> >>>>> pcuma = np.repeat(pcum,500, 0) >>>>> rvs = np.random.uniform(size=(1000)) >>>>> a,b,c = pcuma.T >>>>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>>>> mnrvs[:500].sum(0) >> array([456, ?17, ?27]) >>>>> mnrvs[500:].sum(0) >> array([ 24, ?35, 441]) >>>>> p*500 >> array([[ 450., ? 25., ? 25.], >> ? ? ? [ ?25., ? 25., ?450.]]) >> > > similar works for e.g. n=5 > > Josef hi Josef, thanks very much for your reply. i see how this works for the specific case where the multinomial vector of probabilities has 3 numbers in it, for which you defined the corresponding variables "a", "b" and "c". but is there a way to rewrite this code to make it work for an arbitrarily sized vector of probabilities? for example, i'm not sure how one would generalize this line: mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) any ideas on how to do this would be greatly appreciated. thanks again. 
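p.s. here's my own rough stab at a general version, building on your
cumsum idea -- i have no idea whether it's correct or fast
(`sample_general` is just a name i made up):

import numpy as np

def sample_general(p):
    # p: (n_rows, k) array, each row a probability vector summing to 1
    pcum = p.cumsum(axis=1)
    # one uniform draw per row
    rvs = np.random.uniform(size=(p.shape[0], 1))
    # index of the first cumulative bin that reaches the draw
    idx = (rvs <= pcum).argmax(axis=1)
    # expand the winning index into an indicator row
    return (idx[:, None] == np.arange(p.shape[1])).astype(int)

is this roughly what you had in mind?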
From josef.pktd at gmail.com Thu Oct 15 00:07:01 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 15 Oct 2009 00:07:01 -0400 Subject: [SciPy-User] vectorized version of 'multinomial' sampling function In-Reply-To: References: <1cd32cbb0910132207o772571e7t94e2d9d619fe3593@mail.gmail.com> <1cd32cbb0910132241r404bb37am69487b23eeef1ef2@mail.gmail.com> <1cd32cbb0910132319j428be4daldc4e7b6accaad0c0@mail.gmail.com> Message-ID: <1cd32cbb0910142107q4db750a2xb90e5f11e63acda8@mail.gmail.com> On Wed, Oct 14, 2009 at 11:05 PM, per freem wrote: > wrote: > > >>>> If you only want n=1, then it seems to me, you could do this also with >>>> drawing an integer out of range(3) with the given probabilities. This >>>> can be vectorized easily directly with numpy. (just like an individual >>>> observation in multinomial logit) >>>> >>>> Josef >>>> >>>>> >>>>> to get a vector of multinomial samplers, each using the nth list in >>>>> 'p'. something like: >>>>> >>>>> array([[1, 0, 0], [0, 0 1]]) in this case. is this possible? it seems >>>>> like 'multinomial' takes only a one dimensional array. i could write >>>>> this as a "for" loop of course but i prefer a vectorized version since >>>>> speed is crucial for me here. >>> >>> >>> for n=1 the following should work (unless I still cannot tell >>> multinomial distribution and multinomial logit apart) >>> >>> Josef >>> >>>>>> p = np.array([[ 0.9 , ?0.05, ?0.05], >>> ? ? ? [ 0.05, ?0.05, ?0.9 ]]) >>>>>> pcum = p.cumsum(1) >>>>>> pcuma = np.repeat(pcum,20, 0) >>>>>> rvs = np.random.uniform(size=(40)) >>>>>> a,b,c = pcuma.T >>>>>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>>>>> mnrvs[:20].sum() >>> 20 >>>>>> mnrvs[:20].sum(0) >>> array([19, ?0, ?1]) >>>>>> mnrvs[20:].sum() >>> 20 >>>>>> mnrvs[20:].sum(0) >>> array([ 1, ?0, 19]) >>>>>> mnrvs[15:25] >>> array([[0, 0, 1], >>> ? ? ? [1, 0, 0], >>> ? ? ? [1, 0, 0], >>> ? ? ? [1, 0, 0], >>> ? ? ? [1, 0, 0], >>> ? ? ? [0, 0, 1], >>> ? ? ? [0, 0, 1], >>> ? ? ? [0, 0, 1], >>> ? ? ? [1, 0, 0], >>> ? ? ? [0, 0, 1]]) >>> >>> >>> check for larger sample: >>> >>>>>> pcuma = np.repeat(pcum,500, 0) >>>>>> rvs = np.random.uniform(size=(1000)) >>>>>> a,b,c = pcuma.T >>>>>> mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) >>>>>> mnrvs[:500].sum(0) >>> array([456, ?17, ?27]) >>>>>> mnrvs[500:].sum(0) >>> array([ 24, ?35, 441]) >>>>>> p*500 >>> array([[ 450., ? 25., ? 25.], >>> ? ? ? [ ?25., ? 25., ?450.]]) >>> >> >> similar works for e.g. n=5 >> >> Josef > > > hi Josef, > > thanks very much for your reply. i see how this works for the specific > case where the multinomial vector of probabilities has 3 numbers in > it, for which you defined the corresponding variables "a", "b" and > "c". but is there a way to rewrite this code to make it work for an > arbitrarily sized vector of probabilities? for example, i'm not sure > how one would generalize this line: > > mnrvs = np.column_stack((rvs<=a, (rvs>a) & (rvs<=b), rvs>b)).astype(int) > > any ideas on how to do this would be greatly appreciated. ?thanks again. 
>

Here is what I used for the multinomial logit case

rvsunif = np.random.rand(nobs,1)
yrvs = (rvsunif < pcum).argmax(1)[:,None]

>>> dummyvar = (yrvs == np.arange(nk)).astype(int)
>>> dummyvar[:10,:]
array([[1, 0, 0],
       [0, 0, 1],
       [1, 0, 0],
       [0, 1, 0],
       [0, 1, 0],
       [0, 1, 0],
       [1, 0, 0],
       [0, 1, 0],
       [1, 0, 0],
       [0, 0, 1]])

Hope that helps,

Josef

From yosefmel at post.tau.ac.il  Thu Oct 15 03:30:43 2009
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Thu, 15 Oct 2009 09:30:43 +0200
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T])
In-Reply-To:
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il>
Message-ID: <200910150930.43980.yosefmel@post.tau.ac.il>

On Wednesday 14 October 2009 19:23:47 denis wrote:
> On Oct 14, 2:02 pm, Yosef Meller wrote:
> > v_expr = numpy.vectorize(expr)
> > v_expr(T)
> >
> > Is that what you wanted?
>
> Yosef,
>  thanks, the right direction -- there must be a numpy primitive for
> this.
> But 2 problems with vectorize:
> 1) an optional arg => TypeError: __call__() got an unexpected keyword
> argument 'h'
> 2) vectorize => broadcasting => ValueError
> Here's a test case: ugly, but funciter() is at least correct :)
>
>
> """ funciter, vectorize -- 14oct """
> import numpy as np
>
> def spline_2p2s( t, p0, p1, m0, m1, h=1 ):
>     """ Hermite 2-point, 2-slope spline
>         t: a scalar / range / iterator
>         p0 p1 m0 m1: scalars or arrays
>         Beware: t and p0 both vecs => broadcasting =>
>             ValueError: shape mismatch: objects cannot be broadcast to a single shape
>         (need guidelines, axioms on broadcasting)
>     """
>     def f(t):
>         t2 = t*t
>         t3 = t2*t
>         return (
>               p0 * (2*t3 - 3*t2 + 1)
>             + p1 * (-2*t3 + 3*t2)
>             + m0 * h * (t3 - 2*t2 + t)
>             + m1 * h * (t3 - t2) )
>     return funciter( f, t )
>
> def funciter( f, T ):
>     return f(T) if np.isscalar(T) \
>         else np.array([ f(t) for t in T ])
>
> #...............................................................................
> if __name__ == "__main__":
>     t = np.arange( 0, 1.01, .1 )
>     p0 = np.array(( 0, 0 ))
>     p1 = np.array(( 1, 0 ))
>     m0 = np.array(( 1, 1 ))
>     m1 = np.array(( 1, -1 ))
>
>     s = spline_2p2s( t, p0, p1, m0, m1 )
>     print "spline_2p2s", s.T
>
>     spline_2p2s_vec = np.vectorize( spline_2p2s )
>     s = spline_2p2s_vec( t, p0, p1, m0, m1 )
>     print "spline_2p2s_vec", s.T

First of all, the candidate for vectorization is f(t) inside
spline_2p2s, not spline_2p2s itself. Second, since for each t you may
get multiple results based on the size of pi, mi, you might want to
check frompyfunc() instead of vectorize().

Most important, if broadcasting is hard for you, you might still save a
lot of time by using np.multiply.outer() instead of vectorizing an inner
function. outer() returns the operation's result on each possible pair
of its two arguments.

See if that works for you.
Yours, yosef.

From nicoletti at consorzio-innova.it  Thu Oct 15 04:33:08 2009
From: nicoletti at consorzio-innova.it (Marco Nicoletti)
Date: Thu, 15 Oct 2009 10:33:08 +0200
Subject: [SciPy-User] Bug in interpolate.polyint.py and fix
Message-ID:

Dear all,

I guess I found a bug in the module scipy.interpolate.polyint.py.
If you run the following lines: #-----------krogh.py---------------# from scipy import interpolate import numpy # Krogh example xi = [0,3,4,10] yi = [10,3,4,20] krogh = interpolate.KroghInterpolator(xi, yi) # Evaluate in one single point tmp = 0.5 tmp = numpy.asarray(tmp) val = krogh(tmp) #---------------------------------------# you obtain the following error: #---------------------------------------# >python -u "krogh.py" Traceback (most recent call last): File "krogh.py", line 10, in val = krogh(tmp) File "C:\Python25\lib\site-packages\scipy\interpolate\polyint.py", line 109, in __call__ m = len(x) TypeError: len() of unsized object #---------------------------------------# I have tried to fix it adding the following check in the file polyint.py at line 100 in function __call__(self, x): #-------------------------------------------# if np.size(x)==1: x = np.asarray(x) x = x.tolist() #-------------------------------------------# I have tested it running test_polyint.py and it worked. Your opinions? Marco Nicoletti -------------- next part -------------- An HTML attachment was scrubbed... URL: From denis-bz-gg at t-online.de Thu Oct 15 11:12:12 2009 From: denis-bz-gg at t-online.de (denis) Date: Thu, 15 Oct 2009 08:12:12 -0700 (PDT) Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: <200910150930.43980.yosefmel@post.tau.ac.il> References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> <200910150930.43980.yosefmel@post.tau.ac.il> Message-ID: On Oct 15, 9:30?am, Yosef Meller wrote: > > First of all, the candidate for vectorization is f(t) inside spline_2p2s, not > spline_2p2s itself. Did anyone try the testcase, and understand why vectorize BREAKS the function ? It works as written with funciter() but vectorize => broadcasting => ValueError. Vectorize f ? return np.vectorize( f ) (t) # instead of return funciter( f, t ) => ValueError: setting an array element with a sequence > Second, since for each t you may get multiple results > based on the size of pi, mi, you might want to check frompyfunc() instead of > vectorize(). ? p0 p1 m0 m1 are all the same shape, scalars / vecs / arrays to interpolate at one or several `t` s. Anyway frompyfunc broke with ValueError too but I don't understand pydoc numpy.frompyfunc -- and if (googling) vectorize is a wrapper for frompyfunc, how can that help ? cheers -- denis From josef.pktd at gmail.com Thu Oct 15 11:56:26 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 15 Oct 2009 11:56:26 -0400 Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> Message-ID: <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> On Wed, Oct 14, 2009 at 1:23 PM, denis wrote: > On Oct 14, 2:02 pm, Yosef Meller wrote: > >> v_expr = numpy.vectorize(expr) >> v_expr(T) >> >> Is that what you wanted? > > > Yosef, > ?thanks, the right direction -- there must be a numpy primitive for > this. > But 2 problems with vectorize: > 1) an optional arg => TypeError: __call__() got an unexpected keyword > argument 'h' > 2) vectorize => broadcasting => ValueError > Here's a test case: ugly, but funciter() is at least correct :) broadcasting is nice, two changes below and it produces an output without exception. I didn't check whether the output makes sense. 
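
An aside on the scalar-versus-array handling this thread keeps circling:
a minimal sketch of the usual normalize-then-dispatch idiom (here f stands
for any array-capable evaluator, e.g. a KroghInterpolator instance from
the report above; the helper name is illustrative):

import numpy as np

def eval_scalar_or_array(f, x):
    """Call f on x whether x is a scalar or an array; return a scalar
    for scalar input and an array otherwise."""
    x = np.asarray(x)
    scalar_input = (x.ndim == 0)
    y = f(np.atleast_1d(x))
    return y[0] if scalar_input else y

Marco's np.size(x)==1 check above is a special case of this; normalizing
with atleast_1d also avoids treating length-1 arrays as scalars.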
I guess vectorize still needs to have compatible array dimension in its arguments for broadcasting. But in your function, vectorize doesn't seem necessary. Josef > > > """ funciter, vectorize ? ?14oct """ > import numpy as np > > def spline_2p2s( t, p0, p1, m0, m1, h=1 ): > ? ?""" Hermite 2-point, 2-slope spline > ? ? ? ?t: a scalar / range / iterator > ? ? ? ?p0 p1 m0 m1: scalars or arrays > ? ? ? ?Beware: t and p0 both vecs => broadcasting => > ? ? ? ? ? ?ValueError: shape mismatch: objects cannot be broadcast to > a single shape > ? ? ? ?(need guidelines, axioms on broadcasting) > ? ?""" > ? ?def f(t): > ? ? ? ?t2 = t*t > ? ? ? ?t3 = t2*t > ? ? ? ?return ( > ? ? ? ? ? ? ?p0 * (2*t3 - 3*t2 + 1) > ? ? ? ? ? ?+ p1 * (-2*t3 + 3*t2) > ? ? ? ? ? ?+ m0 * h * (t3 - 2*t2 + t) > ? ? ? ? ? ?+ m1 * h * (t3 - t2) ) change number 1 - ? ?return funciter( f, t ) + return f(t) #funciter( f, t ) > > def funciter( f, T ): > ? ?return f(T) if np.isscalar(T) \ > ? ? ? ?else np.array([ f(t) for t in T ]) > > #............................................................................... > if __name__ == "__main__": change number two: - ? ?t = np.arange( 0, 1.01, .1 ) + t = np.arange( 0, 1.01, .1 )[:,None] > ? ?p0 = np.array(( 0, 0 )) > ? ?p1 = np.array(( 1, 0 )) > ? ?m0 = np.array(( 1, 1 )) > ? ?m1 = np.array(( 1, -1 )) > > ? ?s = spline_2p2s( t, p0, p1, m0, m1 ) > ? ?print "spline_2p2s", s.T > > ? ?spline_2p2s_vec = np.vectorize( spline_2p2s ) > ? ?s = spline_2p2s_vec( t, p0, p1, m0, m1 ) > ? ?print "spline_2p2s_vec", s.T > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Thu Oct 15 12:13:34 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 15 Oct 2009 12:13:34 -0400 Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> Message-ID: <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com> On Thu, Oct 15, 2009 at 11:56 AM, wrote: > On Wed, Oct 14, 2009 at 1:23 PM, denis wrote: >> On Oct 14, 2:02 pm, Yosef Meller wrote: >> >>> v_expr = numpy.vectorize(expr) >>> v_expr(T) >>> >>> Is that what you wanted? >> >> >> Yosef, >> ?thanks, the right direction -- there must be a numpy primitive for >> this. >> But 2 problems with vectorize: >> 1) an optional arg => TypeError: __call__() got an unexpected keyword >> argument 'h' >> 2) vectorize => broadcasting => ValueError >> Here's a test case: ugly, but funciter() is at least correct :) > > > broadcasting is nice, two changes below and it produces an output > without exception. > I didn't check whether the output makes sense. I guess vectorize still > needs to have compatible array dimension in its arguments for > broadcasting. But in your function, vectorize doesn't seem necessary. > > Josef > >> >> >> """ funciter, vectorize ? ?14oct """ >> import numpy as np >> >> def spline_2p2s( t, p0, p1, m0, m1, h=1 ): >> ? ?""" Hermite 2-point, 2-slope spline >> ? ? ? ?t: a scalar / range / iterator >> ? ? ? ?p0 p1 m0 m1: scalars or arrays >> ? ? ? ?Beware: t and p0 both vecs => broadcasting => >> ? ? ? ? ? ?ValueError: shape mismatch: objects cannot be broadcast to >> a single shape >> ? ? ? 
(need guidelines, axioms on broadcasting)
>>     """
>>     def f(t):
>>         t2 = t*t
>>         t3 = t2*t
>>         return (
>>               p0 * (2*t3 - 3*t2 + 1)
>>             + p1 * (-2*t3 + 3*t2)
>>             + m0 * h * (t3 - 2*t2 + t)
>>             + m1 * h * (t3 - t2) )
>
> change number 1
>
> -    return funciter( f, t )
> +    return f(t)  # funciter( f, t )
>
>> def funciter( f, T ):
>>     return f(T) if np.isscalar(T) \
>>         else np.array([ f(t) for t in T ])
>>
>> #...............................................................................
>> if __name__ == "__main__":
>
> change number two:
>
> -    t = np.arange( 0, 1.01, .1 )
> +    t = np.arange( 0, 1.01, .1 )[:,None]
>
>>     p0 = np.array(( 0, 0 ))
>>     p1 = np.array(( 1, 0 ))
>>     m0 = np.array(( 1, 1 ))
>>     m1 = np.array(( 1, -1 ))
>>
>>     s = spline_2p2s( t, p0, p1, m0, m1 )
>>     print "spline_2p2s", s.T
>>
>>     spline_2p2s_vec = np.vectorize( spline_2p2s )
>>     s = spline_2p2s_vec( t, p0, p1, m0, m1 )
>>     print "spline_2p2s_vec", s.T
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>

here's a minimal example of broadcasting:

>>> p = np.array(( 1, -1 ))
>>> p.shape
(2,)
>>> t = np.arange(3)
>>> t.shape
(3,)
>>> t*p
Traceback (most recent call last):
ValueError: shape mismatch: objects cannot be broadcast to a single shape

cannot multiply elementwise two arrays that have different lengths in the
same dimension

solution: add a new dimension, turn t into a "column vector";
then each (one element) row of `t` is multiplied with each (one element)
column of `p`

t[:,None] or t[:, np.newaxis] adds a new axis and increases the dimension
of the array; more explanation in the docs

>>> np.atleast_2d(t).T
array([[0],
       [1],
       [2]])
>>> np.atleast_2d(t).T * p
array([[ 0,  0],
       [ 1, -1],
       [ 2, -2]])
>>> t[:,None].shape
(3, 1)
>>> t[:,None] * p
array([[ 0,  0],
       [ 1, -1],
       [ 2, -2]])

Josef

From denis-bz-gg at t-online.de  Thu Oct 15 13:20:23 2009
From: denis-bz-gg at t-online.de (denis)
Date: Thu, 15 Oct 2009 10:20:23 -0700 (PDT)
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
	array([expr(t) for t in T])
In-Reply-To: <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
	<200910141402.52413.yosefmel@post.tau.ac.il>
	<1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
	<1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
Message-ID: 

On Oct 15, 6:13 pm, josef.p... at gmail.com wrote:
>
> solution: add a new dimension, turn t into a "column vector"
> then each (one element) row of `t` is multiplied with each (one
> element) column of `p`
>
> t[:,None] or t[:, np.newaxis] add a new axis and increase dimension of array
>
> Josef

Nice, that turns off broadcasting t * p0 etc. which is exactly what's
needed here --
until the evil day when some caller passes in column vecs for p0 etc.
and broadcasting kicks in, wrong.

Is there a general way of turning off broadcasting in particular
functions ?
In interpolation one often has func( t, x ) where t may be a scalar or
range, x anything, and one wants exactly funciter() -- ugly, slow, correct.
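
(To make the shape bookkeeping concrete, a small sketch of the [:,None]
fix discussed above -- values and the name lerp_points are illustrative;
t is assumed 1-D:)

import numpy as np

def lerp_points(t, p0, p1):
    t = np.asarray(t, dtype=float)[:, None]   # (n,) -> (n, 1)
    return p0 * (1 - t) + p1 * t              # (n, 1) against (k,) -> (n, k)

t = np.linspace(0.0, 1.0, 5)
p0 = np.array([0.0, 0.0])
p1 = np.array([1.0, 0.0])
print(lerp_points(t, p0, p1).shape)           # (5, 2): one point per t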
By the way, .../site-packages/numpy/doc/broadcasting.py in the dist has
pretty good examples.

From josef.pktd at gmail.com  Thu Oct 15 13:30:43 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 15 Oct 2009 13:30:43 -0400
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
	array([expr(t) for t in T])
In-Reply-To: 
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
	<200910141402.52413.yosefmel@post.tau.ac.il>
	<1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
	<1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
Message-ID: <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>

On Thu, Oct 15, 2009 at 1:20 PM, denis wrote:
> On Oct 15, 6:13 pm, josef.p... at gmail.com wrote:
>>
>> solution: add a new dimension, turn t into a "column vector"
>> then each (one element) row of `t` is multiplied with each (one
>> element) column of `p`
>>
>> t[:,None] or t[:, np.newaxis] add a new axis and increase dimension of array
>>
>> Josef
>
> Nice, that turns off broadcasting t * p0 etc. which is exactly what's

this turns on broadcasting, or better, it makes it possible for numpy to
do the broadcasting.

> needed here --
> until the evil day when some caller passes in column vecs for p0 etc.
> and broadcasting kicks in, wrong.
>
> Is there a general way of turning off broadcasting in particular
> functions ?
> In interpolation one often has func( t, x ) where t may be a scalar or
> range, x anything,
> and wants exactly funciter() -- ugly, slow, correct.

The usual way to handle this in a function is to adjust the dimension and
shape, e.g. with atleast_1d or atleast_2d, and add or change an axis if
necessary. There are a lot of examples of this interface in the numpy and
scipy code.

If p0, ... and t are always required to be at most one dimensional, then
you could do asarray, ravel and then add the axis with [:, None] or
similar.

If it's supposed to work with a t that has many rows and columns (2d),
then you would need a third axis and it will be a little bit more
complicated.

Josef

>
> By the way, .../site-packages/numpy/doc/broadcasting.py in the dist has
> pretty good examples
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From d.l.goldsmith at gmail.com  Thu Oct 15 14:17:21 2009
From: d.l.goldsmith at gmail.com (David Goldsmith)
Date: Thu, 15 Oct 2009 11:17:21 -0700
Subject: [SciPy-User] matlab's built-in colormaps in scipy?
Message-ID: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com>

Or only via matplotlib? If they are available in scipy, where exactly?
Thanks!

DG
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Thu Oct 15 14:45:11 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 15 Oct 2009 13:45:11 -0500
Subject: [SciPy-User] matlab's built-in colormaps in scipy?
In-Reply-To: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com>
References: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com>
Message-ID: <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com>

On Thu, Oct 15, 2009 at 13:17, David Goldsmith wrote:
> Or only via matplotlib? If they are available in scipy, where exactly?

There are no colormaps in scipy. scipy does no plotting.
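
(For the record, a minimal sketch of the matplotlib route -- matplotlib is
assumed to be installed, and jet is just one example of a matlab-style map;
newer matplotlib exposes the same maps via matplotlib.colormaps['jet']:)

import numpy as np
from matplotlib import cm

data = np.linspace(0.0, 1.0, 16).reshape(4, 4)
rgba = cm.jet(data)      # a Colormap instance maps floats in [0, 1] to RGBA
print(rgba.shape)        # (4, 4, 4)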
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From d.l.goldsmith at gmail.com Thu Oct 15 15:11:09 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 15 Oct 2009 12:11:09 -0700 Subject: [SciPy-User] matlab's built-in colormaps in scipy? In-Reply-To: <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com> References: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com> <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com> Message-ID: <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> OK. John H. (or anyone): please remind: where in matplotlib are the matlab-like colormaps? Thanks! DG On Thu, Oct 15, 2009 at 11:45 AM, Robert Kern wrote: > On Thu, Oct 15, 2009 at 13:17, David Goldsmith > wrote: > > Or only via matplotlib? If they are available in scipy, where exactly? > > There are no colormaps in scipy. scipy does no plotting. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Oct 15 15:16:13 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Oct 2009 14:16:13 -0500 Subject: [SciPy-User] matlab's built-in colormaps in scipy? In-Reply-To: <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> References: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com> <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com> <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> Message-ID: <3d375d730910151216j5c099f8fqbb93ec0df9687f09@mail.gmail.com> On Thu, Oct 15, 2009 at 14:11, David Goldsmith wrote: > OK.? John H. (or anyone): please remind: where in matplotlib are the > matlab-like colormaps?? Thanks! Where all the colormaps are: in matplotlib/_cm.py -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From d.l.goldsmith at gmail.com Thu Oct 15 15:16:43 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 15 Oct 2009 12:16:43 -0700 Subject: [SciPy-User] matlab's built-in colormaps in scipy? In-Reply-To: <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> References: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com> <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com> <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> Message-ID: <45d1ab480910151216g25035f5bje48a6951c6ab99ba@mail.gmail.com> Duh! Thanks for waking me up, Robert: if scipy doesn't do any plotting, how was I making my plots? Answer: I'm already using Image: anyone know if and where Image has matlab-like colormaps? DG On Thu, Oct 15, 2009 at 12:11 PM, David Goldsmith wrote: > OK. John H. (or anyone): please remind: where in matplotlib are the > matlab-like colormaps? Thanks! > > DG > > > On Thu, Oct 15, 2009 at 11:45 AM, Robert Kern wrote: > >> On Thu, Oct 15, 2009 at 13:17, David Goldsmith >> wrote: >> > Or only via matplotlib? 
If they are available in scipy, where exactly? >> >> There are no colormaps in scipy. scipy does no plotting. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it as >> though it had an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Oct 15 15:18:13 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 15 Oct 2009 14:18:13 -0500 Subject: [SciPy-User] matlab's built-in colormaps in scipy? In-Reply-To: <45d1ab480910151216g25035f5bje48a6951c6ab99ba@mail.gmail.com> References: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com> <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com> <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> <45d1ab480910151216g25035f5bje48a6951c6ab99ba@mail.gmail.com> Message-ID: <3d375d730910151218q2b72cf4ev275d48897d09662f@mail.gmail.com> On Thu, Oct 15, 2009 at 14:16, David Goldsmith wrote: > Duh!? Thanks for waking me up, Robert: if scipy doesn't do any plotting, how > was I making my plots?? Answer: I'm already using Image: anyone know if and > where Image has matlab-like colormaps? It doesn't. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From d.l.goldsmith at gmail.com Thu Oct 15 15:31:37 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 15 Oct 2009 12:31:37 -0700 Subject: [SciPy-User] matlab's built-in colormaps in scipy? In-Reply-To: <3d375d730910151218q2b72cf4ev275d48897d09662f@mail.gmail.com> References: <45d1ab480910151117r71003ff4ufec3f73f5c1eb7bf@mail.gmail.com> <3d375d730910151145y7238989fp2025af796991d8dc@mail.gmail.com> <45d1ab480910151211m5a0410c4wdfda53776a2bd267@mail.gmail.com> <45d1ab480910151216g25035f5bje48a6951c6ab99ba@mail.gmail.com> <3d375d730910151218q2b72cf4ev275d48897d09662f@mail.gmail.com> Message-ID: <45d1ab480910151231x63f2a34u260377e6b216399b@mail.gmail.com> OK, thanks! DG On Thu, Oct 15, 2009 at 12:18 PM, Robert Kern wrote: > On Thu, Oct 15, 2009 at 14:16, David Goldsmith > wrote: > > Duh! Thanks for waking me up, Robert: if scipy doesn't do any plotting, > how > > was I making my plots? Answer: I'm already using Image: anyone know if > and > > where Image has matlab-like colormaps? > > It doesn't. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From peridot.faceted at gmail.com  Thu Oct 15 16:45:42 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 15 Oct 2009 16:45:42 -0400
Subject: [SciPy-User] Bug in interpolate.polyint.py and fix
In-Reply-To: 
References: 
Message-ID: 

Hi,

Pauli Virtanen beat me to a fix:
http://projects.scipy.org/scipy/changeset/5966

I took a more general approach, allowing polyint objects to be applied
to arrays of any dimension:
http://projects.scipy.org/scipy/ticket/1021

Anne

2009/10/15 Marco Nicoletti :
> Dear all,
>
> I guess I found a bug in the module scipy.interpolate.polyint.py.
>
> If you run the following lines:
>
> #-----------krogh.py---------------#
> from scipy import interpolate
> import numpy
> # Krogh example
> xi = [0,3,4,10]
> yi = [10,3,4,20]
> krogh = interpolate.KroghInterpolator(xi, yi)
> # Evaluate in one single point
> tmp = 0.5
> tmp = numpy.asarray(tmp)
> val = krogh(tmp)
> #---------------------------------------#
>
> you obtain the following error:
>
> #---------------------------------------#
>> python -u "krogh.py"
> Traceback (most recent call last):
>   File "krogh.py", line 10, in <module>
>     val = krogh(tmp)
>   File "C:\Python25\lib\site-packages\scipy\interpolate\polyint.py", line
> 109, in __call__
>     m = len(x)
> TypeError: len() of unsized object
> #---------------------------------------#
>
> I have tried to fix it adding the following check in the file polyint.py
> at line 100 in function __call__(self, x):
>
> #-------------------------------------------#
>         if np.size(x)==1:
>             x = np.asarray(x)
>             x = x.tolist()
> #-------------------------------------------#
>
> I have tested it running test_polyint.py and it worked.
> Your opinions?
>
> Marco Nicoletti
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From nicoletti at consorzio-innova.it  Fri Oct 16 02:58:57 2009
From: nicoletti at consorzio-innova.it (Marco Nicoletti)
Date: Fri, 16 Oct 2009 08:58:57 +0200
Subject: [SciPy-User] scipy.interpolate.KroghInterpolator: output confusion.
Message-ID: <49163BE14635496FAC4E4B87302F9DAD@innova.locale>

Dear all,

I have found something that looks like a bug (or at least something
confusing) in the output of the KroghInterpolator class.

The problem is the following:

case 1

from scipy import interpolate
import numpy as np
xi = [2, 3, 4]
yi = [10,13,15]
yi_der = [1, 2, 2.5]
Y = np.asarray([yi, yi_der]).transpose()
krogh = interpolate.KroghInterpolator(xi, Y)
print krogh(2.3)
>> [ 11.005   1.3525]

Evaluating one point 2.3, it returns a 1-D array with array([value,
derivative value]).

case 2

from scipy import interpolate
import numpy as np
xi = [2, 3, 4]
yi = [10,13,15]
krogh = interpolate.KroghInterpolator(xi, yi)
print krogh([2.3, 4.5])
>> [ 11.005  15.625]

Evaluating points [2.3, 4.5], it returns a 1-D array with array([value
for 2.3, value for 4.5]).

You cannot distinguish the two cases. In my opinion case 1 should return
a 2-D array as follows: array([[value, derivative value]]); instead, case
2 should return a 2-D array as follows: array([[value for 2.3], [value
for 4.5]]).

What do you think? Confusing?

Marco Nicoletti
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From peridot.faceted at gmail.com  Fri Oct 16 03:40:09 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 16 Oct 2009 03:40:09 -0400
Subject: [SciPy-User] scipy.interpolate.KroghInterpolator: output confusion.
In-Reply-To: <49163BE14635496FAC4E4B87302F9DAD@innova.locale> References: <49163BE14635496FAC4E4B87302F9DAD@innova.locale> Message-ID: 2009/10/16 Marco Nicoletti : > Dear all, > > I have found something that loks like a bug (or at least a confusing staff) > in the output of KroghIntepolator class. > > The problem is the following: > > case 1 > from scipy import interpolate > import numpy as np > xi = [2, 3, 4] > yi = [10,13,15] > yi_der = [1, 2, 2.5] > Y = np.asarray([yi, yi_der]).transpose() > krogh = interpolate.KroghInterpolator(xi, Y) > print krogh(2.3) >>> [ 11.005??? 1.3525] > Evaluating one point 2.3, it returns a 1-D array with array([value, > derivative value]). This is not what the interpolator is doing. You have constructed an interpolator that takes no derivative information, but produces vector values. To specify derivative information, you must repeat the x value in question. So to do the interpolation you were intending, you should do: krogh = interpolate.KroghInterpolator([2,2,3,3,4,4],[10,1,13,2,15,2.5]) Calling this object then yields scalars. If you want derivatives, you can ask for as many as you want with krogh.derivatives, which will provide you with an array, or krogh.derivative, which will provide you with a particular one. This way of specifying derivatives is a little peculiar, but it is what the underlying algorithm needs, and it also allows you to have different numbers of derivatives at each point (as with the example in the docstring). This odd calling convention is actually explained in the docstring, although it's not emphasized as much as it might be. I should also suggest you be careful with very high-degree polynomials: if you're using them to fit data with noise, you are liable to get high-degree terms that thrash wildly around trying to match your noise. This is one reason splines - piecewise low-degree polynomials - are so popular. Anne From antonio.valentino at tiscali.it Fri Oct 16 05:18:57 2009 From: antonio.valentino at tiscali.it (Antonio Valentino) Date: Fri, 16 Oct 2009 11:18:57 +0200 Subject: [SciPy-User] scipy.interpolate.KroghInterpolator: output confusion. In-Reply-To: References: <49163BE14635496FAC4E4B87302F9DAD@innova.locale> Message-ID: <20091016111857.0bb41e03@asigrid01> Hi Anne, Il giorno Fri, 16 Oct 2009 03:40:09 -0400 Anne Archibald ha scritto: > 2009/10/16 Marco Nicoletti : > > Dear all, > > > > I have found something that loks like a bug (or at least a > > confusing staff) in the output of KroghIntepolator class. > > > > The problem is the following: > > > > case 1 > > from scipy import interpolate > > import numpy as np > > xi = [2, 3, 4] > > yi = [10,13,15] > > yi_der = [1, 2, 2.5] > > Y = np.asarray([yi, yi_der]).transpose() > > krogh = interpolate.KroghInterpolator(xi, Y) > > print krogh(2.3) > >>> [ 11.005??? 1.3525] > > Evaluating one point 2.3, it returns a 1-D array with array([value, > > derivative value]). > > This is not what the interpolator is doing. You have constructed an > interpolator that takes no derivative information, but produces vector > values. To specify derivative information, you must repeat the x value > in question. So to do the interpolation you were intending, you should > do: > > krogh = > interpolate.KroghInterpolator([2,2,3,3,4,4],[10,1,13,2,15,2.5]) > > Calling this object then yields scalars. If you want derivatives, you > can ask for as many as you want with krogh.derivatives, which will > provide you with an array, or krogh.derivative, which will provide you > with a particular one. 
> > This way of specifying derivatives is a little peculiar, but it is > what the underlying algorithm needs, and it also allows you to have > different numbers of derivatives at each point (as with the example in > the docstring). the example seems to conflicts with the rest of the docstring that clearly states: """ Parameters ---------- xi : array-like, length N known x-coordinates yi : array-like, N by R known y-coordinates, interpreted as vectors of length R, or scalars if R=1 """ xi: N yi: NxR with R = number of derivatives available at each point. Is it a bug in the documentation? I think that the one described in the docstring is a more handy interface with respect to the one of the example. > This odd calling convention is actually explained in the docstring, > although it's not emphasized as much as it might be. > > I should also suggest you be careful with very high-degree > polynomials: if you're using them to fit data with noise, you are > liable to get high-degree terms that thrash wildly around trying to > match your noise. This is one reason splines - piecewise low-degree > polynomials - are so popular. > > > Anne > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Antonio Valentino From peridot.faceted at gmail.com Fri Oct 16 05:36:39 2009 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 16 Oct 2009 05:36:39 -0400 Subject: [SciPy-User] scipy.interpolate.KroghInterpolator: output confusion. In-Reply-To: <20091016111857.0bb41e03@asigrid01> References: <49163BE14635496FAC4E4B87302F9DAD@innova.locale> <20091016111857.0bb41e03@asigrid01> Message-ID: 2009/10/16 Antonio Valentino : > Hi Anne, > > Il giorno Fri, 16 Oct 2009 03:40:09 -0400 > Anne Archibald ha scritto: > >> 2009/10/16 Marco Nicoletti : >> > Dear all, >> > >> > I have found something that loks like a bug (or at least a >> > confusing staff) in the output of KroghIntepolator class. >> > >> > The problem is the following: >> > >> > case 1 >> > from scipy import interpolate >> > import numpy as np >> > xi = [2, 3, 4] >> > yi = [10,13,15] >> > yi_der = [1, 2, 2.5] >> > Y = np.asarray([yi, yi_der]).transpose() >> > krogh = interpolate.KroghInterpolator(xi, Y) >> > print krogh(2.3) >> >>> [ 11.005??? 1.3525] >> > Evaluating one point 2.3, it returns a 1-D array with array([value, >> > derivative value]). >> >> This is not what the interpolator is doing. You have constructed an >> interpolator that takes no derivative information, but produces vector >> values. To specify derivative information, you must repeat the x value >> in question. So to do the interpolation you were intending, you should >> do: >> >> krogh = >> interpolate.KroghInterpolator([2,2,3,3,4,4],[10,1,13,2,15,2.5]) >> >> Calling this object then yields scalars. If you want derivatives, you >> can ask for as many as you want with krogh.derivatives, which will >> provide you with an array, or krogh.derivative, which will provide you >> with a particular one. >> >> This way of specifying derivatives is a little peculiar, but it is >> what the underlying algorithm needs, and it also allows you to have >> different numbers of derivatives at each point (as with the example in >> the docstring). > > the example seems to conflicts with the rest of the docstring that > clearly states: > > """ > ? ?Parameters > ? ?---------- > ? ?xi : array-like, length N > ? ? ? ?known x-coordinates > ? ?yi : array-like, N by R > ? ? ? 
?known y-coordinates, interpreted as vectors of length R, > ? ? ? ?or scalars if R=1 > """ The docstring is a little confusing, I admit, because some yi may be derivative values rather than y-values. But they are indeed vectors of length R. This allows you to, for example, construct a cubic curve in space, so that p(t) = (x,y,z). > xi: N > yi: NxR with R = number of derivatives available at each point. This is not what the docstring says, and this is not how the code works. You do not provide derivative values in this way. Plot the polynomial values you get out and you will see. > Is it a bug in the documentation? Obviously the docstring isn't clear enough, since users are being confused by it. > I think that the one described in the docstring is a more handy > interface with respect to the one of the example. The problem with the interface you are suggesting, in which you provide a matrix of function and derivative values, is that it forces you to specify the same number of derivatives at each point. The current interface allows you to specify more derivatives at some points than others. If you want to convert, you can do something like: xs = np.repeat(xi,2) ys = np.ravel(np.dstack((yi,ypi))) In the docstring itself you see an example of such construction: >>> KroghInterpolator([0,0,1],[0,2,0]) This constructs a quadratic that is zero at zero, zero at one, and has a derivative of 2 at one - that is, the quadratic 2*x**2-2*x. There is no constraint on the derivative at 1. Anne From antonio.valentino at tiscali.it Fri Oct 16 06:40:51 2009 From: antonio.valentino at tiscali.it (Antonio Valentino) Date: Fri, 16 Oct 2009 12:40:51 +0200 Subject: [SciPy-User] scipy.interpolate.KroghInterpolator: output confusion. In-Reply-To: References: <49163BE14635496FAC4E4B87302F9DAD@innova.locale> <20091016111857.0bb41e03@asigrid01> Message-ID: <20091016124051.38dbecfb@asigrid01> Il giorno Fri, 16 Oct 2009 05:36:39 -0400 Anne Archibald ha scritto: > 2009/10/16 Antonio Valentino : > > Hi Anne, > > > > Il giorno Fri, 16 Oct 2009 03:40:09 -0400 > > Anne Archibald ha scritto: [CUT] > >> vector values. To specify derivative information, you must repeat > >> the x value in question. So to do the interpolation you were > >> intending, you should do: > >> > >> krogh = > >> interpolate.KroghInterpolator([2,2,3,3,4,4],[10,1,13,2,15,2.5]) > >> > >> Calling this object then yields scalars. If you want derivatives, > >> you can ask for as many as you want with krogh.derivatives, which > >> will provide you with an array, or krogh.derivative, which will > >> provide you with a particular one. > >> > >> This way of specifying derivatives is a little peculiar, but it is > >> what the underlying algorithm needs, and it also allows you to have > >> different numbers of derivatives at each point (as with the > >> example in the docstring). > > > > the example seems to conflicts with the rest of the docstring that > > clearly states: > > > > """ > > ? ?Parameters > > ? ?---------- > > ? ?xi : array-like, length N > > ? ? ? ?known x-coordinates > > ? ?yi : array-like, N by R > > ? ? ? ?known y-coordinates, interpreted as vectors of length R, > > ? ? ? ?or scalars if R=1 > > """ > > The docstring is a little confusing, I admit, because some yi may be > derivative values rather than y-values. But they are indeed vectors of > length R. This allows you to, for example, construct a cubic curve in > space, so that p(t) = (x,y,z). Oh, I had completely misunderstood! Thanks for clarification. 
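
(For concreteness, the repeated-abscissa convention Anne describes, as a
small runnable sketch with the values from the example above:)

from scipy.interpolate import KroghInterpolator

# repeated x-values mark derivative data: pairs (x, y), (x, y') in order
k = KroghInterpolator([2, 2, 3, 3, 4, 4], [10, 1, 13, 2, 15, 2.5])
print(k(2.3))                # the interpolated value, a scalar
print(k.derivative(2.3, 1))  # the first derivative at 2.3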
> > xi: N > > yi: NxR with R = number of derivatives available at each point. > > This is not what the docstring says, and this is not how the code > works. You do not provide derivative values in this way. Plot the > polynomial values you get out and you will see. > > > Is it a bug in the documentation? > > Obviously the docstring isn't clear enough, since users are being > confused by it. Yes, my suggestion is to add a lot of examples > > I think that the one described in the docstring is a more handy > > interface with respect to the one of the example. > > The problem with the interface you are suggesting, in which you > provide a matrix of function and derivative values, is that it forces > you to specify the same number of derivatives at each point. The > current interface allows you to specify more derivatives at some > points than others. If you want to convert, you can do something like: again this aspect can be deduced from the example but not clearly stated in the description > xs = np.repeat(xi,2) > ys = np.ravel(np.dstack((yi,ypi))) > > In the docstring itself you see an example of such construction: > > >>> KroghInterpolator([0,0,1],[0,2,0]) > > This constructs a quadratic that is zero at zero, zero at one, and has > a derivative of 2 at one - that is, the quadratic 2*x**2-2*x. There is > no constraint on the derivative at 1. thanks -- Antonio Valentino From denis-bz-gg at t-online.de Fri Oct 16 12:12:54 2009 From: denis-bz-gg at t-online.de (denis) Date: Fri, 16 Oct 2009 09:12:54 -0700 (PDT) Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com> References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com> <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com> Message-ID: <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com> Thanks Josef, I've summarized this Q+A in http://advice.mechanicalkern.com of course, please edit it if you like. (That looks like a good place for such Q+A s, or http://stackoverflow.com/questions/tagged/scipy ? a separate discussion). cheers -- denis From josef.pktd at gmail.com Fri Oct 16 13:10:06 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 16 Oct 2009 13:10:06 -0400 Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com> References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com> <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com> <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com> Message-ID: <1cd32cbb0910161010ge3eb08ela32141fea19ef68e@mail.gmail.com> On Fri, Oct 16, 2009 at 12:12 PM, denis wrote: > Thanks Josef, > > ?I've summarized this Q+A in http://advice.mechanicalkern.com > of course, please edit it if you like. > (That looks like a good place for such Q+A s, or > http://stackoverflow.com/questions/tagged/scipy ? > a separate discussion). 
> > cheers > ?-- denis One problem with doing the broadcasting for the user, is that it is not always clear what the intention of the user is, although it might be very suggestive from the context. In your example: # Worse, print lerp( np.linspace( .1, .5, 3 ), p0, p1 ) # => a nonsense result, with no warning This could also be exactly what the user wants, evaluate the function at 3 points, taking one value from each array. Or, if p0 and p1 are column vectors and t is 1d or row vector, the user would get correctly broadcasted values but in row order. I like your penalized least squares problem (more general than Ridge Regression) A few comments to the broadcasting example: I still think that "restrict" is not really the right word in "A way to restrict broadcasting in such cases", better would be "to force correct broadcasting". To avoid the problem, with 2-dim arguments, I would ravel t and the p's first Your scalar check doesn't check the dimension of p1. Thanks for contributing to the docs. Josef From robert.kern at gmail.com Fri Oct 16 15:17:56 2009 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 16 Oct 2009 14:17:56 -0500 Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com> References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com> <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com> <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com> Message-ID: <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com> On Fri, Oct 16, 2009 at 11:12, denis wrote: > Thanks Josef, > > ?I've summarized this Q+A in http://advice.mechanicalkern.com > of course, please edit it if you like. > (That looks like a good place for such Q+A s, or > http://stackoverflow.com/questions/tagged/scipy ? > a separate discussion). advice.mechanicalkern.com is intended to be a demo site for something that will go up on scipy.org. Unfortunately, I have not had the time to devote to making the transition. Using StackOverflow itself would have more permanence until the scipy.org site is ready. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From perfreem at gmail.com Sat Oct 17 11:36:26 2009 From: perfreem at gmail.com (per freem) Date: Sat, 17 Oct 2009 11:36:26 -0400 Subject: [SciPy-User] vectorized version of logsumexp? (from scipy.maxentropy) Message-ID: hi all, in my code, i use the function 'logsumexp' from scipy.maxentropy a lot. as far as i can tell, this function has no vectorized version that works on an m-x-n matrix. i might be doing something wrong here, but i found that this function can run extremely slowly if used as follows: i have an array of log probability vectors, such that each column sums to one. i want to simply iterate over each column and renormalize it, using exp(col - logsumexp(col)). here is the code that i used to profile this operation: from scipy import * from numpy import * from numpy.random.mtrand import dirichlet from scipy.maxentropy import logsumexp import time # build an array of probability vectors. each column represents a probability vector. 
num_vectors = 1000000 log_prob_vectors = transpose(log(dirichlet([1, 1, 1], num_vectors))) # now renormalize each column, using logsumexp norm_prob_vectors = [] t1 = time.time() for n in range(num_vectors): norm_p = exp(log_prob_vectors[:, n] - logsumexp(log_prob_vectors[:, n])) norm_prob_vectors.append(norm_p) t2 = time.time() norm_prob_vectors = array(norm_prob_vectors) print "logsumexp renormalization (%d many times) took %s seconds." %(num_vectors, str(t2-t1)) i found that even with only 100,000 elements, this code takes about 5 seconds: logsumexp renormalization (100000 many times) took 5.07085394859 seconds. with 1 million elements, it becomes prohibitively slow: logsumexp renormalization (1000000 many times) took 70.7815010548 seconds. is there a way to speed this up? most vectorized operations that work on matrices in numpy/scipy are incredibly fast and it seems like a vectorized version of logsumexp should be near instant on this scale. is there a way to rewrite the above snippet so that it's faster? thanks very much for your help. From warren.weckesser at enthought.com Sat Oct 17 17:37:46 2009 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 17 Oct 2009 16:37:46 -0500 Subject: [SciPy-User] vectorized version of logsumexp? (from scipy.maxentropy) In-Reply-To: References: Message-ID: <4ADA392A.9010005@enthought.com> Hi, Here's the code for logsumexp from scipy.maxentropy: ---------- def logsumexp(a): """Compute the log of the sum of exponentials log(e^{a_1}+...e^{a_n}) of the components of the array a, avoiding numerical overflow. """ a = asarray(a) a_max = a.max() return a_max + log((exp(a-a_max)).sum()) ---------- What's missing is an 'axis' keyword argument to restrict the reduction to a single axis of the array. Here's a version with an 'axis' keyword: ---------- def my_logsumexp(a, axis=None): if axis is None: # Use the scipy.maxentropy version. return logsumexp(a) a = asarray(a) shp = list(a.shape) shp[axis] = 1 a_max = a.max(axis=axis) s = log(exp(a - a_max.reshape(shp)).sum(axis=axis)) lse = a_max + s return lse ---------- The attached script includes this function, and runs your calculation twice, once as you originally wrote it, and once using my_logsumexp. Depending on the number of vectors, the new version is 75 to 100 times faster. Cheers, Warren per freem wrote: > hi all, > > in my code, i use the function 'logsumexp' from scipy.maxentropy a > lot. as far as i can tell, this function has no vectorized version > that works on an m-x-n matrix. i might be doing something wrong here, > but i found that this function can run extremely slowly if used as > follows: i have an array of log probability vectors, such that each > column sums to one. i want to simply iterate over each column and > renormalize it, using exp(col - logsumexp(col)). here is the code that > i used to profile this operation: > > from scipy import * > from numpy import * > from numpy.random.mtrand import dirichlet > from scipy.maxentropy import logsumexp > import time > > # build an array of probability vectors. each column represents a > probability vector. 
> num_vectors = 1000000
> log_prob_vectors = transpose(log(dirichlet([1, 1, 1], num_vectors)))
> # now renormalize each column, using logsumexp
> norm_prob_vectors = []
> t1 = time.time()
> for n in range(num_vectors):
>     norm_p = exp(log_prob_vectors[:, n] - logsumexp(log_prob_vectors[:, n]))
>     norm_prob_vectors.append(norm_p)
> t2 = time.time()
> norm_prob_vectors = array(norm_prob_vectors)
> print "logsumexp renormalization (%d many times) took %s seconds." \
>     %(num_vectors, str(t2-t1))
>
> i found that even with only 100,000 elements, this code takes about 5 seconds:
>
> logsumexp renormalization (100000 many times) took 5.07085394859 seconds.
>
> with 1 million elements, it becomes prohibitively slow:
>
> logsumexp renormalization (1000000 many times) took 70.7815010548 seconds.
>
> is there a way to speed this up? most vectorized operations that work
> on matrices in numpy/scipy are incredibly fast and it seems like a
> vectorized version of logsumexp should be near instant on this scale.
> is there a way to rewrite the above snippet so that it's faster?
>
> thanks very much for your help.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: my_logsumexp.py
URL: 

From denis-bz-gg at t-online.de  Sun Oct 18 07:34:42 2009
From: denis-bz-gg at t-online.de (denis)
Date: Sun, 18 Oct 2009 04:34:42 -0700 (PDT)
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
	array([expr(t) for t in T])
In-Reply-To: <1cd32cbb0910161010ge3eb08ela32141fea19ef68e@mail.gmail.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
	<200910141402.52413.yosefmel@post.tau.ac.il>
	<1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
	<1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
	<1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
	<663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
	<1cd32cbb0910161010ge3eb08ela32141fea19ef68e@mail.gmail.com>
Message-ID: <4492e6cf-613b-47b2-a3ec-418b1710a939@e8g2000yqo.googlegroups.com>

On Oct 16, 7:10 pm, josef.p... at gmail.com wrote:
...
> print lerp( np.linspace( .1, .5, 3 ), p0, p1 )
>     # => a nonsense result, with no warning
>
> This could also be exactly what the user wants, evaluate the function
> at 3 points, taking one value from each array.
> Or, if p0 and p1 are column vectors and t is 1d or row vector, the
> user would get correctly broadcasted values but in row order.

When interpolating colors, you want a color; in general, you want
interpolated.shape == input.shape -- yes, assert t is 1d or a row.

> A few comments to the broadcasting example:
> I still think that "restrict" is not really the right word in "A way
> to restrict broadcasting in such cases", better would be "to force
> correct broadcasting".

How about "limit" ? "force" is strong.

> To avoid the problem, with 2-dim arguments, I would ravel t and the p's first

I'm afraid that would increase my (neophyte) FUD, fear, uncertainty and
doubt: could there be inputs of shape ... that someday don't work?
Better to assert or raise NotImplementedError than that!

Do you and other experts have edit-anything rights in
advice.mechanicalkern.com ? I'd find that good -- fix it once.
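
(Stepping back to the logsumexp thread above: with an axis-aware
reduction like Warren's, the per-column Python loop disappears entirely.
A sketch of the equivalent exp-normalize form -- the same arithmetic as
exp(col - logsumexp(col)); the Dirichlet data is just for illustration:)

import numpy as np

def normalize_log_columns(log_p):
    """Stably renormalize each column of log_p (shape (k, n)) so the
    exponentials sum to 1 down each column."""
    m = log_p.max(axis=0)        # per-column max, shape (n,)
    z = np.exp(log_p - m)        # subtract the max to avoid overflow
    return z / z.sum(axis=0)

log_p = np.log(np.random.dirichlet([1, 1, 1], 5).T)   # (3, 5) columns
print(normalize_log_columns(log_p).sum(axis=0))       # all ones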
cheers -- denis From denis-bz-gg at t-online.de Sun Oct 18 09:13:40 2009 From: denis-bz-gg at t-online.de (denis) Date: Sun, 18 Oct 2009 06:13:40 -0700 (PDT) Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else array([expr(t) for t in T]) In-Reply-To: <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com> References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com> <200910141402.52413.yosefmel@post.tau.ac.il> <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com> <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com> <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com> <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com> <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com> Message-ID: <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com> On Oct 16, 9:17?pm, Robert Kern wrote: > Using StackOverflow itself would > have more permanence until the scipy.org site is ready. How about stackoverflow for daily questions, and advice.mechanicalkern / a corner of scipy for microtutorials, 1-2 pages, starting with an a "wanted: microtutorials on ..." page for requests ? Or, just leave the field to stackoverflow ? There's an expert discussion on Scipy Doc Project / scipy foundation 31 July http://permalink.gmane.org/gmane.comp.python.scientific.devel/11384 > stackoverflow is a a mix between a FAQ > and wikipedia. It works extremely well ... From contact at pythonxy.com Sun Oct 18 14:11:20 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 18 Oct 2009 20:11:20 +0200 Subject: [SciPy-User] [ANN] Python(x,y) 2.6.3.0 released Message-ID: <4ADB5A48.1030605@pythonxy.com> Hi all, I'm quite pleased (and relieved) to announce that Python(x,y) version 2.6.3.0 has been released. It is the first release based on Python 2.6 -- note that Python(x,y) version number will now follow the included Python version (Python(x,y) vX.Y.Z.N will be based on Python vX.Y.Z). Python(x,y) is a free Python distribution providing a ready-to-use scientific development software for numerical computations, data analysis and data visualization based on Python programming language, Qt graphical user interfaces (and development framework), Eclipse integrated development environment and Spyder interactive development environment. Its purpose is to help scientific programmers used to interpreted languages (such as MATLAB or IDL) or compiled languages (C/C++ or Fortran) to switch to Python. 
It is now available for Windows XP/Vista/7 (as well as for Ubuntu through the pythonxy-linux project -- note that included software may differs from the Windows version): http://www.pythonxy.com Major changes since v2.1.17: * Python 2.6.3 * Spyder 1.0.0 -- the Scientific PYthon Development EnviRonment, a powerful MATLAB-like development environment introducing exclusive features in the scientific Python community (http://packages.python.org/spyder/) * MinGW 4.4.0 -- including gcc 4.4.0 and gfortran * Pydev 1.5.0 -- now including the powerful code analysis features of Pydev Extensions (formerly available as a commercial extension to the free Pydev plugin) * Enthought Tool Suite 3.3.0 * PyQt 4.5.4 and PyQwt 5.2.0 * VTK 5.4.2 * ITK 3.16 -- Built for Python 2.6 thanks to the help of Charl Botha, DeVIDE (Delft Visualisation and Image processing Development Environment) Complete release notes: http://www.pythonxy.com/download.php - Pierre From contact at pythonxy.com Sun Oct 18 14:11:27 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 18 Oct 2009 20:11:27 +0200 Subject: [SciPy-User] [ANN] Spyder v1.0.0 released Message-ID: <4ADB5A4F.7010504@pythonxy.com> Hi all, I'm pleased to announce here that Spyder version 1.0.0 has been released: http://packages.python.org/spyder Previously known as Pydee, Spyder (Scientific PYthon Development EnviRonment) is a free open-source Python development environment providing MATLAB-like features in a simple and light-weighted software, available for Windows XP/Vista/7, GNU/Linux and MacOS X: * advanced code editing features (code analysis, ...) * interactive console with MATLAB-like workpace (with GUI-based list, dictionary, tuple, text and array editors -- screenshots: http://packages.python.org/spyder/console.html#the-workspace) and integrated matplotlib figures * external console to open an interpreter or run a script in a separate process (with a global variable explorer providing the same features as the interactive console's workspace) * code analysis with pyflakes and pylint * search in files features * documentation viewer: automatically retrieves docstrings or source code of the function/class called in the interactive/external console * integrated file/directories explorer * MATLAB-like path management ...and more! Spyder is part of spyderlib, a Python module based on PyQt4 and QScintilla2 which provides powerful console-related PyQt4 widgets. I would like to thanks here all the Spyder users and especially the beta testers and contributors: without them, Spyder wouldn't be as stable, easy-to-use and full-featured as it is. - Pierre From schlesin at cshl.edu Sun Oct 18 14:18:24 2009 From: schlesin at cshl.edu (Felix Schlesinger) Date: Sun, 18 Oct 2009 14:18:24 -0400 Subject: [SciPy-User] Multiprocessing and shared memory Message-ID: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com> Hi, I have been working on an application using scipy that solves a highly parallel problem. To avoid the GIL in python I used to multiprocessing package. The main issue I ran into is shared memory. All workers share (read-only) access to a single large numpy array. This should be simple, but implemented naively (passing the array as a function argument to the worker process) will eventually create copies of the whole array in memory (I guess when a process writes to the array to change the reference count, triggering the UNIX copy on write mechanism. To avoid this I saw two mechanisms: 1. 
Using multiprocessing.Array and passing it to numpy.frombuffer (see http://groups.google.com/group/comp.lang.python/browse_thread/thread/79fcf022b01b7fc3) This has the disadvantage to messing with the ctypes to numpy conversion and generally looks clumsy. 2. Using numpy.memmap. (http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html?highlight=mapped) This has the disadvantage that I need to create file descriptors, keep track of them and make sure that the are closed at the right moment (when I tried to get It to work implicitly, I ran into memory leaks, I think due to the files not being closed when worker processes terminate.). Is there a third way (ideally passing a simple numpy.array and ensuring that the workers never write to the object and hence rely on simple Unix shared memory (copy on write)? If not, which of the two ways above is preferred and are there some tricks to make it work robustly? On an somewhat unrelated note: I read that parts of numpy internaly use multithreading to avoid the global interpreter lock. Which parts are that and how is it triggered? Specifically is there a way to run numerical expressions on large arrays in parallel (each thread working on a part of the array)? I am doing things like exp(special.gammaln(arr1 * x) - arr2) Felix From sturla at molden.no Sun Oct 18 14:38:24 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 18 Oct 2009 20:38:24 +0200 Subject: [SciPy-User] Multiprocessing and shared memory In-Reply-To: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com> References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com> Message-ID: <4ADB60A0.6090300@molden.no> Felix Schlesinger skrev: > I have been working on an application using scipy that solves a highly > parallel problem. To avoid the GIL in python I used to multiprocessing > package. The main issue I ran into is shared memory. Ga?l Varoquaux and I did some work on that some months ago. It's not as trivial as it seems, but we have a working solution. http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip Basically it uses named shared memory (Sys V IPC on Unix) as buffer. The ndarray is pickled by its kernel name, the buffer is not copied. Thus you can quickly communicate shared memory ndarrays between processes (using multiprocessing.Queue). Note that there is a pesky memory we could get rid of on Unix: It stems from multiprocessing using os._exit instead of sys.exit to terminate worker processes, preventing any clean-up code from executing. The bug is in multiprocessing, not our code. Windows refcounts shared memory segments in the kernel and is not affected. Change any occurrence of os._exit in multiprocessing to sys.exit and it will work just fine. Well, it's not quite up to date for all 64 bits systems. I'll fix that some day. Sturla Molden From sturla at molden.no Sun Oct 18 14:57:03 2009 From: sturla at molden.no (Sturla Molden) Date: Sun, 18 Oct 2009 20:57:03 +0200 Subject: [SciPy-User] Multiprocessing and shared memory In-Reply-To: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com> References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com> Message-ID: <4ADB64FF.3010504@molden.no> Felix Schlesinger skrev: > 1. Using multiprocessing.Array and passing it to numpy.frombuffer (see > http://groups.google.com/group/comp.lang.python/browse_thread/thread/79fcf022b01b7fc3) > This has the disadvantage to messing with the ctypes to numpy > conversion and generally looks clumsy. 
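
(A minimal sketch of this first mechanism, for reference: the Array must
be allocated at module level before the workers are created, and the
'fork' start method, i.e. Unix, is assumed -- under 'spawn' the
module-level allocation is re-run per worker and nothing is shared:)

import numpy as np
import multiprocessing as mp

n = 4
shared = mp.Array('d', n * n, lock=False)   # allocated before forking
arr = np.frombuffer(shared, dtype=np.float64).reshape(n, n)
arr[:] = np.arange(n * n, dtype=np.float64).reshape(n, n)

def row_sum(i):
    # re-wrap the inherited buffer inside the worker; no data is copied
    a = np.frombuffer(shared, dtype=np.float64).reshape(n, n)
    return a[i].sum()

if __name__ == '__main__':
    pool = mp.Pool(2)               # workers inherit `shared` at fork
    print(pool.map(row_sum, range(n)))
    pool.close()
    pool.join()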
multiprocessing.Array cannot be communicated between processes except at
fork. Passing it through multiprocessing.Queue will fail. You must
preallocate all shared memory in advance of instantiating
multiprocessing.Process.

> 2. Using numpy.memmap
> (http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html?highlight=mapped)
> This has the disadvantage that I need to create file descriptors, keep
> track of them and make sure that they are closed at the right moment

numpy.memmap uses BSD mmap, not System V IPC. That means the shared
segment has no name, so it must be created in the parent prior to
forking.

> (when I tried to get it to work implicitly, I ran into memory leaks, I
> think due to the files not being closed when worker processes
> terminate).

It is probably due to multiprocessing using os._exit instead of
sys.exit. Clean-up code is never executed. You must manually close any
file handle in the worker process. Also, it is pickled by copying the
buffer content, so if you pass it to multiprocessing.Queue, the child
gets a private copy instead of the shared-memory array.

> I read that parts of numpy internally use multithreading to avoid the
> global interpreter lock. Which parts are that and how is it triggered?
> Specifically, is there a way to run numerical expressions on large
> arrays in parallel (each thread working on a part of the array)? I am
> doing things like
> exp(special.gammaln(arr1 * x) - arr2)

There is a multicore branch of numpy. I have never used it. Intel MKL
has multicore support. You can build NumPy against it. At least LAPACK,
BLAS (and possibly FFT) should use multiple cores.

Also note that you can use Cython with normal Python threads, and
release the GIL when working with the ndarrays in Cython. Cython has a
special syntax for numpy arrays. This is what I currently do, and why I
more or less lost my interest in shared memory. The GIL is only a
problem if you don't release it. But in Cython you can just do:

with nogil:
    ...

OpenMP is nice if you can put some bottlenecks in C or Fortran (ctypes,
f2py or Cython).

S.M.

From sturla at molden.no  Sun Oct 18 14:58:23 2009
From: sturla at molden.no (Sturla Molden)
Date: Sun, 18 Oct 2009 20:58:23 +0200
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: <4ADB60A0.6090300@molden.no>
References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com>
 <4ADB60A0.6090300@molden.no>
Message-ID: <4ADB654F.7080500@molden.no>

Sturla Molden skrev:
>
> Basically it uses named shared memory (Sys V IPC on Unix) as buffer. The
> ndarray is pickled by its kernel name, the buffer is not copied. Thus
> you can quickly communicate shared memory ndarrays between processes
> (using multiprocessing.Queue).
> > Note that there is a pesky memory we

"memory leak"

From robert.kern at gmail.com  Sun Oct 18 15:31:55 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 18 Oct 2009 14:31:55 -0500
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
 array([expr(t) for t in T])
In-Reply-To: <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
 <200910141402.52413.yosefmel@post.tau.ac.il>
 <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
 <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
 <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
 <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
 <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com>
 <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
Message-ID: <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>

On Sun, Oct 18, 2009 at 08:13, denis wrote:
> On Oct 16, 9:17 pm, Robert Kern wrote:
>> Using StackOverflow itself would
>> have more permanence until the scipy.org site is ready.
>
> How about stackoverflow for daily questions,
> and advice.mechanicalkern / a corner of scipy for microtutorials, 1-2
> pages,
> starting with a "wanted: microtutorials on ..." page for requests ?
>
> Or, just leave the field to stackoverflow ?
> There's an expert discussion on Scipy Doc Project / scipy foundation
> 31 July
> http://permalink.gmane.org/gmane.comp.python.scientific.devel/11384

That was precisely the discussion that led me to put up
advice.mechanicalkern.com. The point of that conversation was that the
StackOverflow concept was a good one, not that the StackOverflow site
itself is a better place than a clone just for the scientific Python
community. The problem with using StackOverflow itself for our purposes
is that there is no single tag that can be used to consolidate the
relevant questions. I think we would benefit from a more tightly focused
community.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robince at gmail.com  Sun Oct 18 18:20:01 2009
From: robince at gmail.com (Robin)
Date: Sun, 18 Oct 2009 23:20:01 +0100
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com>
References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com>
Message-ID: <2d5132a50910181520x340fb210x43fab451e2264e22@mail.gmail.com>

On Sun, Oct 18, 2009 at 7:18 PM, Felix Schlesinger wrote:
> I have been working on an application using scipy that solves a highly
> parallel problem. To avoid the GIL in python I used the multiprocessing
> package. The main issue I ran into is shared memory. All workers share
> (read-only) access to a single large numpy array.

I'm probably wrong - but if it is really only read-only access to the
array you need, can't you just put it in a module variable before the
fork - then all the workers can access it, and as long as they don't
touch it it shouldn't make a copy. ie something like

import mymodule
mymodule.data = data_array
p = Pool(8)

or a multiprocessing fork - then each process can access
mymodule.data? I don't see why the reference count would have to change
if you leave the reference in a module.

Cheers

Robin
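A minimal, runnable sketch of the module-global approach Robin describes
(the worker function and array contents here are hypothetical, and this
assumes a Unix fork-based start of the worker processes):

import numpy as np
from multiprocessing import Pool

# Module-level global, created before the Pool forks: the workers
# inherit the pages via copy-on-write instead of receiving a pickled
# copy of the array.
data = None

def rowsum(i):
    # Read-only access: summing a row never writes to the buffer,
    # so no pages of `data` are copied in the child.
    return data[i].sum()

if __name__ == '__main__':
    data = np.random.rand(1000, 1000)
    pool = Pool(4)        # fork happens here, after `data` exists
    print pool.map(rowsum, range(1000))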
From sturla at molden.no  Sun Oct 18 19:16:43 2009
From: sturla at molden.no (Sturla Molden)
Date: Mon, 19 Oct 2009 01:16:43 +0200
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: <2d5132a50910181520x340fb210x43fab451e2264e22@mail.gmail.com>
References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com>
 <2d5132a50910181520x340fb210x43fab451e2264e22@mail.gmail.com>
Message-ID: <4ADBA1DB.30709@molden.no>

Robin skrev:
> I'm probably wrong - but if it is really only read-only access to the
> array you need, can't you just put it in a module variable before the
> fork - then all the workers can access it, and as long as they don't
> touch it it shouldn't make a copy.

If you have a copy-on-write optimized fork (e.g. modern linux kernels),
yes, pages that are never written to are never copied.

S.M.

From ndbecker2 at gmail.com  Sun Oct 18 20:04:55 2009
From: ndbecker2 at gmail.com (Neal Becker)
Date: Sun, 18 Oct 2009 20:04:55 -0400
Subject: [SciPy-User] Multiprocessing and shared memory
References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com>
 <4ADB60A0.6090300@molden.no>
Message-ID: 

Sturla Molden wrote:

> Felix Schlesinger skrev:
>> I have been working on an application using scipy that solves a highly
>> parallel problem. To avoid the GIL in python I used the multiprocessing
>> package. The main issue I ran into is shared memory.
>
> Gaël Varoquaux and I did some work on that some months ago. It's not as
> trivial as it seems, but we have a working solution.
>
> http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip
>
> Basically it uses named shared memory (Sys V IPC on Unix) as buffer. The
> ndarray is pickled by its kernel name, the buffer is not copied. Thus
> you can quickly communicate shared memory ndarrays between processes
> (using multiprocessing.Queue).

Wouldn't posix ipc be nicer than sysv?

From schlesin at cshl.edu  Sun Oct 18 21:04:18 2009
From: schlesin at cshl.edu (Felix Schlesinger)
Date: Sun, 18 Oct 2009 21:04:18 -0400
Subject: [SciPy-User] Multiprocessing and shared memory
Message-ID: <7317d50c0910181804h2575df7dna122c2f00ff4e09a@mail.gmail.com>

> Felix Schlesinger skrev:
>> I have been working on an application using scipy that solves a highly
>> parallel problem. To avoid the GIL in python I used the multiprocessing
>> package. The main issue I ran into is shared memory.
>
> http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip
>
> Basically it uses named shared memory (Sys V IPC on Unix) as buffer. The
> ndarray is pickled by its kernel name, the buffer is not copied. Thus
> you can quickly communicate shared memory ndarrays between processes
> (using multiprocessing.Queue).

> numpy.memmap uses BSD mmap, not System V IPC. That means the shared
> segment has no name, so it must be created in the parent prior to
> forking.

I am fine with passing the shared array at fork, so this is not much of
an issue, but I am not sure I follow anyway: numpy.memmap is constructed
based on an open file handle. Couldn't I just pass that filehandle along
a queue if needed? Again, not really what I am worrying about. The
memory leaks are much more serious. I am not quite sure calling sys.exit
is safe either. It might affect module-level objects in the main process
(buffers, etc.). The details are tricky as always.

> There is a multicore branch of numpy. I have never used it.

I'll look into that, but I did not find much documentation.
> Also note that you can use Cython with normal Python threads, and
> release the GIL when working with the ndarrays in Cython. Cython has a
> special syntax for numpy arrays. This is what I currently do, and why I
> more or less lost my interest in shared memory. The GIL is only a
> problem if you don't release it. But in Cython you can just do:
>
> with nogil:
>     ...

Well this only works if the parallel operation does not touch any python
objects, right? Otherwise it would reenter the interpreter without the
GIL and not be thread-safe anymore. In other words the whole algorithm
would have to be moved into cython. Is this approach stable in your
experience? It sounds like it would be easy to create race conditions to
me.

Felix

From schlesin at cshl.edu  Sun Oct 18 21:11:50 2009
From: schlesin at cshl.edu (Felix Schlesinger)
Date: Sun, 18 Oct 2009 21:11:50 -0400
Subject: [SciPy-User] Multiprocessing and shared memory
Message-ID: <7317d50c0910181811o441aa494i29a16ab26d1104e8@mail.gmail.com>

> Robin skrev:
>> I'm probably wrong - but if it is really only read-only access to the
>> array you need, can't you just put it in a module variable before the
>> fork - then all the workers can access it, and as long as they don't
>> touch it it shouldn't make a copy.
>
> If you have a copy-on-write optimized fork (e.g. modern linux kernels),
> yes, pages that are never written to are never copied.

That does work if one is careful never to create any new reference to
the shared array or modify it in any other implicit way in the worker
process. The problem is that a modification will not cause an error, but
simply a copy (i.e. a silent memory leak).

In summary, it seems that the memmap approach is fine for shared data
already available at fork, but one has to close the underlying
filehandle explicitly after all workers have quit.

Multicore numpy sounds interesting too. Does anyone know what the state
of that is? Or, if available, the Intel MKL (but that is neither open
nor free), and at that point it's hard to know which parts will use
parallel processing.

Felix

From schlesin at cshl.edu  Sun Oct 18 21:23:45 2009
From: schlesin at cshl.edu (Felix Schlesinger)
Date: Sun, 18 Oct 2009 21:23:45 -0400
Subject: [SciPy-User] scipy.weave.blitz bug using functions
Message-ID: <7317d50c0910181823m3e7a9925h582e7733c00989ef@mail.gmail.com>

Hi,

is this a bug or am I doing something wrong?

import scipy
from scipy.weave import blitz  # import needed for the calls below

arr = scipy.random.rand(1000000)
res = scipy.zeros_like(arr)
blitz("""res = sin(2*sin(arr+1))""")

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/blitz_tools.pyc
in blitz(expr, local_dict, global_dict, check_size, verbose, **kw)
     32     # of time. It also can cause core-dumps if the sizes of the inputs
     33     # aren't compatible.
---> 34     if check_size and not size_check.check_expr(expr,local_dict,global_dict):
     35         raise 'inputs failed to pass size check.'
     36

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/weave/size_check.pyc
in check_expr(expr, local_vars, global_vars)
     49         if isnumeric(val):
     50             values[var] = val
---> 51     exec(expr,values)
     52     try:
     53         exec(expr,values)

NameError: name 'sin' is not defined
---------------------------------------------------------------------------

If I do

blitz("""res = sin(2*sin(arr+1))""", check_size=0)

it works.
Seems to me the check_size code tries to check the size of the array
'sin', instead of recognizing it as a function.

Felix

From sturla at molden.no  Sun Oct 18 22:41:03 2009
From: sturla at molden.no (Sturla Molden)
Date: Mon, 19 Oct 2009 04:41:03 +0200
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: 
References: <7317d50c0910181118g7c9ef8abq319f059ea786b752@mail.gmail.com>
 <4ADB60A0.6090300@molden.no>
Message-ID: <4ADBD1BF.8020605@molden.no>

Neal Becker skrev:
> Wouldn't posix ipc be nicer than sysv?

No. We tried and failed. It had to do with refcounting the shared
segment. Posix ipc doesn't report all we need to know, Sys V does.

Sturla

From sturla at molden.no  Sun Oct 18 23:08:47 2009
From: sturla at molden.no (Sturla Molden)
Date: Mon, 19 Oct 2009 05:08:47 +0200
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: <7317d50c0910181804h2575df7dna122c2f00ff4e09a@mail.gmail.com>
References: <7317d50c0910181804h2575df7dna122c2f00ff4e09a@mail.gmail.com>
Message-ID: <4ADBD83F.9080105@molden.no>

Felix Schlesinger skrev:
>
>> with nogil:
>>     ...
>
> Well this only works if the parallel operation does not touch any
> python objects, right? Otherwise it would reenter the interpreter
> without the GIL and not be thread-safe anymore.

Yes and no. Cython will tell you if the GIL is required (you'll get a
compilation error). You can use numpy ndarrays in nogil blocks.

import numpy as np
cimport numpy as np

def foobar():
    cdef int n
    cdef double rvalue = 0.1
    cdef np.ndarray[double, ndim=1, mode='c'] array
    array = np.zeros(10)
    with nogil:
        for n from 0 <= n < 10:
            array[n] = rvalue

http://wiki.cython.org/tutorials/numpy

> In other words the
> whole algorithm would have to be moved into cython.

Only the worst bottlenecks.

> Is this approach
> stable in your experience?

Cython is a stable approach. It is used by SciPy and Sage, and many
other projects. It will probably be used to port NumPy to Python 3. It's
not a toy.

You can prototype your code in Python. When you are done, you can add in
some type declarations here and there, compile, and get the same
performance as C or C++.

> It sounds like it would be easy to create
> race conditions to me.

In my experience it is easier to get race conditions when using threads
in C, C++ or Java. Anything outside nogil blocks is serialized with
Cython. Since Cython is compiled, there are no thread switches in
Cython-land until you release the GIL manually. Anything not declared
nogil becomes thread-safe by default.

Also: You can run closures as threads in Python. Just put the loops you
want parallelized in a closure, and you have more or less the same
constructs as used by OpenMP or Apple's GCD.

From sturla at molden.no  Sun Oct 18 23:17:27 2009
From: sturla at molden.no (Sturla Molden)
Date: Mon, 19 Oct 2009 05:17:27 +0200
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: <7317d50c0910181811o441aa494i29a16ab26d1104e8@mail.gmail.com>
References: <7317d50c0910181811o441aa494i29a16ab26d1104e8@mail.gmail.com>
Message-ID: <4ADBDA47.8010107@molden.no>

Felix Schlesinger skrev:
> That does work if one is careful never to create any new reference to
> the shared array or modify it in any other implicit way in the worker
> process.

No no no... Only the pages (blocks of 4096 bytes) written to are copied.
If you don't write to the buffer, nothing is copied. You don't write to
the buffer of an ndarray by creating new references to it.
> The problem is that a modification will not cause an
> error, but simply a copy (i.e. a silent memory leak).

There is no memory leak here.

From schlesin at cshl.edu  Mon Oct 19 00:10:03 2009
From: schlesin at cshl.edu (Felix Schlesinger)
Date: Mon, 19 Oct 2009 04:10:03 +0000 (UTC)
Subject: [SciPy-User] Multiprocessing and shared memory
References: <7317d50c0910181811o441aa494i29a16ab26d1104e8@mail.gmail.com>
 <4ADBDA47.8010107@molden.no>
Message-ID: 

Sturla Molden <sturla at molden.no> writes:

> Felix Schlesinger skrev:
> > That does work if one is careful never to create any new reference to
> > the shared array or modify it in any other implicit way in the worker
> > process.
> No no no...
>
> Only the pages (blocks of 4096 bytes) written to are copied. If you
> don't write to the buffer, nothing is copied.
>
> You don't write to the buffer of an ndarray by creating new references
> to it.

Now that you say it, I agree, that really should be true (unless the
kernel does some magic 'optimization' here and gets it very wrong).
However my program definitely made a copy of the whole array at some (at
first glance unpredictable) point. I'll go back and check. Maybe it had
to do with the fact that I was passing the array as a parameter to a
subfunction within the worker, but I don't see how right now. Thanks for
the hint.

> > The problem is that a modification will not cause an
> > error, but simply a copy (i.e. a silent memory leak).
>
> There is no memory leak here.

Memory leak is the wrong word, since there is still a valid reference to
the memory, but the effect was the same for me. The program gradually
consumed more and more memory until it ran out (I think due to pages
being copied on write).

Felix

From sturla at molden.no  Mon Oct 19 00:50:12 2009
From: sturla at molden.no (Sturla Molden)
Date: Mon, 19 Oct 2009 06:50:12 +0200
Subject: [SciPy-User] Multiprocessing and shared memory
In-Reply-To: 
References: <7317d50c0910181811o441aa494i29a16ab26d1104e8@mail.gmail.com>
 <4ADBDA47.8010107@molden.no>
Message-ID: <4ADBF004.9020803@molden.no>

Felix Schlesinger skrev:
> Now that you say it, I agree, that really should be true (unless the
> kernel does some magic 'optimization' here and gets it very wrong).

It's not the kernel that actually does the copy-on-write. It's
implemented in hardware (the paging memory management unit). A page (4k)
will be copied if it's marked for copy on write.

S.M.
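For reference, a minimal sketch of the other pattern discussed in this
thread -- preallocating a multiprocessing.Array before the workers start
and wrapping it with numpy.frombuffer (the shapes and worker body here
are hypothetical):

import numpy as np
from multiprocessing import Process, Array

def worker(shared, n):
    # Re-wrap the shared ctypes buffer as an ndarray; this copies no
    # data, it just creates a view on the same memory.
    a = np.frombuffer(shared.get_obj()).reshape((n, n))
    print a.sum()   # read-only use; a write here would be seen by all

if __name__ == '__main__':
    n = 1000
    shared = Array('d', n * n)   # 'd' = C double; lock=True by default
    arr = np.frombuffer(shared.get_obj()).reshape((n, n))
    arr[:] = np.random.rand(n, n)   # parent fills the shared buffer
    procs = [Process(target=worker, args=(shared, n)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()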
From dwf at cs.toronto.edu  Mon Oct 19 01:11:11 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 19 Oct 2009 01:11:11 -0400
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
 array([expr(t) for t in T])
In-Reply-To: <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
 <200910141402.52413.yosefmel@post.tau.ac.il>
 <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
 <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
 <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
 <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
 <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com>
 <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
 <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>
Message-ID: <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>

On 18-Oct-09, at 3:31 PM, Robert Kern wrote:

> The problem with using StackOverflow itself for our
> purposes is that there is no single tag that can be used to
> consolidate the relevant questions. I think we would benefit from a
> more tightly focused community.

One of the problems I see with StackOverflow is that there are a lot of
wrong answers peddled around that seem right on the surface, but are
actually incomplete, only work in very specific circumstances, etc. And
because there's no single tag, it's hard for knowledgeable people to
track those questions that are relevant and correct any errors.

Robert, will you have any time to join us for the sprint? Jarrod and I
(and hopefully more) will be working on the sphinxified new site but I'd
be glad to devote some cycles to moving advice over to Solace.

David

From denis-bz-gg at t-online.de  Mon Oct 19 07:40:48 2009
From: denis-bz-gg at t-online.de (denis)
Date: Mon, 19 Oct 2009 04:40:48 -0700 (PDT)
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
 array([expr(t) for t in T])
In-Reply-To: <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
 <200910141402.52413.yosefmel@post.tau.ac.il>
 <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
 <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
 <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
 <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
 <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com>
 <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
 <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>
 <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>
Message-ID: <624d3078-b1d5-4f01-be4b-686ab68bb8db@r5g2000yqb.googlegroups.com>

On Oct 19, 7:11 am, David Warde-Farley wrote:
> On 18-Oct-09, at 3:31 PM, Robert Kern wrote:
>
> > The problem with using StackOverflow itself for our
> > purposes is that there is no single tag that can be used to
> > consolidate the relevant questions. I think we would benefit from a
> > more tightly focused community.

Why not just tell people
"use http://stackoverflow.com/questions/tagged/scipy;
that will help the community to improve and collect answers."

Further tags from:
  area: optimize linalg interpol fft ... ~ the list in Wikipedia Scipy
  question level: newbie ...
  review level: ...
would be nice, but I don't see much tag-the-world in scipy.org either --
indexing, tagging, raising the quality of any big web site is really
tough.
In short, I (outsider) see a hard choice:
- try to guide stackoverflow a bit
- or ignore it, come up with something better ... a year later.

cheers
  -- denis

... one of these things where a lot of people feel like, "If only the
rest of the world was educated enough to understand what this is about,
they'd be better off." And I actually kind of agree with that. The
problem is that most of the world could actually care less.
  -- James Gosling

From kmichael.aye at googlemail.com  Mon Oct 19 12:15:06 2009
From: kmichael.aye at googlemail.com (Michael Aye)
Date: Mon, 19 Oct 2009 09:15:06 -0700 (PDT)
Subject: [SciPy-User] [ANN] Python(x,y) 2.6.3.0 released
In-Reply-To: <4ADB5A48.1030605@pythonxy.com>
References: <4ADB5A48.1030605@pythonxy.com>
Message-ID: <92091a3d-4687-46fa-91c2-a9b58501d7fe@a6g2000vbp.googlegroups.com>

This is an awesome distribution, but please, please, bring it out for
Mac OS X, okay? :)

Best regards,
Michael

On Oct 18, 8:11 pm, Pierre Raybaut wrote:
> Hi all,
>
> I'm quite pleased (and relieved) to announce that Python(x,y) version
> 2.6.3.0 has been released. It is the first release based on Python 2.6
> -- note that Python(x,y) version number will now follow the included
> Python version (Python(x,y) vX.Y.Z.N will be based on Python vX.Y.Z).
[clip]

From kmichael.aye at googlemail.com  Mon Oct 19 12:17:54 2009
From: kmichael.aye at googlemail.com (Michael Aye)
Date: Mon, 19 Oct 2009 09:17:54 -0700 (PDT)
Subject: [SciPy-User] Enthought's Traits and Python 2.6
Message-ID: 

Hi all,
the current downloadable Enthought Distribution is running Python 2.5.
Is there a way to use the Enthought traits with Python 2.6, as I need
2.6 for its multiprocessing library.
Best regards,
Michael

From wangyouzh at gmail.com  Mon Oct 19 12:23:15 2009
From: wangyouzh at gmail.com (=?GB2312?B?zfXT0dbS?=)
Date: Tue, 20 Oct 2009 00:23:15 +0800
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
Message-ID: 

Hello!

I have used scipy to solve a multi-linear regression problem; I used the
function lstsq in scipy.linalg.

But, when I compared the result with using matlab, I found the function
regress in matlab gave me a very different result.

Why are they inconsistent, and which result should I trust?

Best Wishes!

Fred

From robert.kern at gmail.com  Mon Oct 19 12:23:43 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 19 Oct 2009 11:23:43 -0500
Subject: [SciPy-User] Enthought's Traits and Python 2.6
In-Reply-To: 
References: 
Message-ID: <3d375d730910190923q7514b7f8xc0797ea84589b7a@mail.gmail.com>

On Mon, Oct 19, 2009 at 11:17, Michael Aye wrote:
> Hi all,
> the current downloadable Enthought Distribution is running Python 2.5.
> Is there a way to use the Enthought traits with Python 2.6, as I need
> 2.6 for its multiprocessing library.

You will want to ask questions about the Enthought Tool Suite on
enthought-dev, not scipy-user.

https://mail.enthought.com/mailman/listinfo/enthought-dev

There are a couple of things to note. The first is that the version of
Python used as the basis for the Enthought Python Distribution is
unrelated to the versions of Python supported by the Enthought Tool
Suite. You should be able to build Traits from source with Python 2.6
just fine.

However, it is also worth noting that you do not need Python 2.6 in
order to use multiprocessing. It is available as a third party package
backported to Python 2.5. In fact, it is bundled with the Enthought
Python Distribution.

http://pypi.python.org/pypi/multiprocessing

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Mon Oct 19 12:27:30 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 19 Oct 2009 11:27:30 -0500
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: 
References: 
Message-ID: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>

On Mon, Oct 19, 2009 at 11:23, Wang Youzhong wrote:
> Hello!
>
> I have used scipy to solve a multi-linear regression problem; I used the
> function lstsq in scipy.linalg.
>
> But, when I compared the result with using matlab, I found the function
> regress in matlab gave me a very different result.
>
> Why are they inconsistent, and which result should I trust?

Matlab's regress() is a higher level function that may have set up the
low level linear least squares problem differently than you did when you
used scipy.linalg.lstsq(). Without seeing your data and your code, that
is really the only clue we can provide you.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From josef.pktd at gmail.com  Mon Oct 19 12:28:38 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 19 Oct 2009 12:28:38 -0400
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: 
References: 
Message-ID: <1cd32cbb0910190928p2e54d727t30a17d4f62617959@mail.gmail.com>

On Mon, Oct 19, 2009 at 12:23 PM, Wang Youzhong wrote:
> Hello!
>
> I have used scipy to solve a multi-linear regression problem; I used the
> function lstsq in scipy.linalg.
>
> But, when I compared the result with using matlab, I found the function
> regress in matlab gave me a very different result.
>
> Why are they inconsistent, and which result should I trust?

They should give the same results, up to numerical precision and
non-uniqueness of the solution.

Post an example, and we can see where the difference is.

Josef

From wangyouzh at gmail.com  Mon Oct 19 12:34:59 2009
From: wangyouzh at gmail.com (=?GB2312?B?zfXT0dbS?=)
Date: Tue, 20 Oct 2009 00:34:59 +0800
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
References: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
Message-ID: 

Hello!

The data used in scipy and matlab is the same.

There is a dependent variable Y and 9 independent variables X; each
variable has 90100 elements.

Each variable has been standardized.

In scipy, I used: c,resid,rank,sigma = linalg.lstsq(Y,X)

and in matlab: [b,bint]=regress(Y,X).

Thank You!

Fred

From robert.kern at gmail.com  Mon Oct 19 12:41:14 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 19 Oct 2009 11:41:14 -0500
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: 
References: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
Message-ID: <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>

2009/10/19 Wang Youzhong:
> Hello!
>
> The data used in scipy and matlab is the same.
> There is a dependent variable Y and 9 independent variables X; each
> variable has 90100 elements.
>
> Each variable has been standardized.
>
> In scipy, I used: c,resid,rank,sigma = linalg.lstsq(Y,X)

That's backwards. You need linalg.lstsq(X,Y).

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From wangyouzh at gmail.com  Mon Oct 19 12:44:35 2009
From: wangyouzh at gmail.com (=?GB2312?B?zfXT0dbS?=)
Date: Tue, 20 Oct 2009 00:44:35 +0800
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>
References: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
 <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>
Message-ID: 

Thank you! I will have a try!

2009/10/20 Robert Kern:
> That's backwards. You need linalg.lstsq(X,Y).
[clip]

From wangyouzh at gmail.com  Mon Oct 19 12:58:48 2009
From: wangyouzh at gmail.com (=?GB2312?B?zfXT0dbS?=)
Date: Tue, 20 Oct 2009 00:58:48 +0800
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>
References: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
 <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>
Message-ID: 

Another question,

How can I do hypothesis testing using scipy? i.e. how can I test the
hypothesis: ci = 0?

Best Wishes!

Fred
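One way to compute the t statistics for H0: c_i = 0 by hand from the
lstsq output -- a sketch with made-up data; the statsmodels links in the
reply below wrap up the same computation:

import numpy as np
from scipy import stats
from scipy.linalg import lstsq

# Hypothetical data: n observations, k regressors (first column constant).
n, k = 90100, 10
X = np.hstack((np.ones((n, 1)), np.random.randn(n, k - 1)))
Y = np.dot(X, np.ones(k)) + np.random.randn(n)

c, resid, rank, sigma = lstsq(X, Y)
df = n - k                                  # residual degrees of freedom
s2 = np.sum((Y - np.dot(X, c))**2) / df     # residual variance
cov = s2 * np.linalg.inv(np.dot(X.T, X))    # covariance matrix of c
t = c / np.sqrt(np.diag(cov))               # t statistic for H0: c_i = 0
p = 2 * stats.t.sf(np.abs(t), df)           # two-sided p-values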
From josef.pktd at gmail.com  Mon Oct 19 13:34:58 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 19 Oct 2009 13:34:58 -0400
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: 
References: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
 <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>
Message-ID: <1cd32cbb0910191034y14400cd1ncccdec0264c034a9@mail.gmail.com>

2009/10/19 Wang Youzhong:
> Another question,
>
> How can I do hypothesis testing using scipy? i.e. how can I test the
> hypothesis: ci = 0?

t test:
http://statsmodels.sourceforge.net/generated/scikits.statsmodels.model.LikelihoodModelResults.t.html#scikits.statsmodels.model.LikelihoodModelResults.t

t tests and f tests for more complicated hypotheses:
http://bazaar.launchpad.net/~scipystats/statsmodels/trunk/annotate/head%3A/scikits/statsmodels/examples/example_ols_tftest.py

Or I think the OLS class in the scipy cookbook should also have it.

Josef

From wangyouzh at gmail.com  Mon Oct 19 13:36:47 2009
From: wangyouzh at gmail.com (=?GB2312?B?zfXT0dbS?=)
Date: Tue, 20 Oct 2009 01:36:47 +0800
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: <1cd32cbb0910191034y14400cd1ncccdec0264c034a9@mail.gmail.com>
References: <3d375d730910190927w71ebcc82qb1722514126794de@mail.gmail.com>
 <3d375d730910190941j51d8ec1cj41975a32df17fa28@mail.gmail.com>
 <1cd32cbb0910191034y14400cd1ncccdec0264c034a9@mail.gmail.com>
Message-ID: 

Thanks very much!
From sturla at molden.no  Mon Oct 19 13:56:01 2009
From: sturla at molden.no (Sturla Molden)
Date: Mon, 19 Oct 2009 19:56:01 +0200
Subject: [SciPy-User] Why scipy supply a different result comparing with
 matlab when doing multi-linear regression analysis
In-Reply-To: 
References: 
Message-ID: <4ADCA831.5080906@molden.no>

Wang Youzhong skrev:
> Hello!
>
> I have used scipy to solve a multi-linear regression problem; I used
> the function lstsq in scipy.linalg.
>
> But, when I compared the result with using matlab, I found the
> function regress in matlab gave me a very different result.

With arrays X and Y, multiple linear regression looks like this:

import numpy as np
import scipy
from scipy.linalg import qr, solve

x = np.vstack((np.ones(X.shape[1]),X)).T
q,r = qr(x, econ=True)
b = solve(r, (np.mat(Y) * np.mat(q)).T).ravel()

From halverson.peter at yahoo.com  Mon Oct 19 17:46:27 2009
From: halverson.peter at yahoo.com (Peter Halverson)
Date: Mon, 19 Oct 2009 14:46:27 -0700 (PDT)
Subject: [SciPy-User] using scipy.optimize.fmin_slsqp and setting
 bounds=(None, None)
Message-ID: <961987.41200.qm@web45908.mail.sp1.yahoo.com>

I'm not sure if this is user error or an actual bug. When I attempt to
set my bounds in fmin_slsqp, the option bounds=[(-10,10),(0,None)] is
not recognized. Scipy crashes with

IndexError                                Traceback (most recent call last)

C:\Documents and Settings\All Users\Documents\Python\ in ()

C:\Python25\lib\site-packages\scipy\optimize\slsqp.pyc in
fmin_slsqp(func, x0, eqcons, f_eqcons, ieqcons, f_ieqcons, bounds,
fprime, fprime_eqcons, fprime_ieqcons, args, iter, acc, iprint,
full_output, epsilon)
    244             if bounds[i][0] > bounds[i][1]:
    245                 raise ValueError, \
--> 246                 'SLSQP Error: lb > ub in bounds[' + str(i) +']  ' + str(bounds[4])
    247
    248     xl = array( [ b[0] for b in bounds ] )

IndexError: list index out of range

My code:

import numpy as np
import scipy as sp
from scipy.optimize import fmin_slsqp as fmincon

# The purpose of this script is to
#   minimize x0 + x1^2
#   subject to x0 - x1 >= 0
#   with -10 <= x0 <= 10 and 0 <= x1

def fitfun(x):
    t = x[0] + x[1]**2
    return t

def confun(x):
    return (x[0] - x[1])

bnds = [(-10,10),(0,None)]
guess = [0.5,0.5]

fmincon(fitfun, guess, ieqcons=[confun], bounds=bnds)

If this is a bug I will report it to the appropriate places. If it's not
a bug please help me sort it out.
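Until the underlying problem is fixed (Robert Kern confirms below that
this is a bug), one possible workaround -- a sketch, assuming large
finite bounds are acceptable for the problem, with a hypothetical helper
finite_bounds -- is to substitute a big number for each None before
calling fmin_slsqp:

from scipy.optimize import fmin_slsqp

def finite_bounds(bnds, big=1e12):
    # Hypothetical helper: replace open-ended (None) bounds with large
    # finite values so the bounds-checking loop never compares None.
    return [(-big if lo is None else lo, big if hi is None else hi)
            for (lo, hi) in bnds]

def fitfun(x):
    return x[0] + x[1]**2

def confun(x):
    return x[0] - x[1]   # inequality constraint, required to be >= 0

guess = [0.5, 0.5]
x = fmin_slsqp(fitfun, guess, ieqcons=[confun],
               bounds=finite_bounds([(-10, 10), (0, None)]))
print x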
From hazelnusse at gmail.com  Mon Oct 19 17:58:57 2009
From: hazelnusse at gmail.com (Luke)
Date: Mon, 19 Oct 2009 14:58:57 -0700
Subject: [SciPy-User] making use of args in odeint
Message-ID: <99214b470910191458q41f5b3mc14156a3d9c9185@mail.gmail.com>

I am integrating some ODE's and want to calculate some extra quantities
during numerical integration. I was hoping to use the args option of
odeint as a way to pass parameters which could be 'output' at each time
step, but it isn't working the way I would expect. Here is a short
example of integrating the equation:

dx/dt = -a*x

where a is a parameter, and we also want to calculate at every time
step:

b = t / 2.

This is just a silly example; the point is to illustrate that during
numerical integration, other quantities besides the state trajectory
x(t) are of interest.

If we wanted to integrate this from the initial condition x = 1, with
a=1, on the interval t=arange(0, 1.2, 0.2), we would expect the
trajectory of b to simply be arange(0, 0.6, 0.1). But instead, b ends up
not being exactly on these times.

from numpy import sin, cos, arange, zeros, array
from scipy.integrate import odeint

def f(x, t, params):
    a = params[0]
    params[1] = t / 2.
    return -a*x

# Initial time, step time, final time
t_i = 0.0
t_s = 0.2
t_f = 1.0
# Initial condition
xi = 1.0
# Time array
t = arange(t_i, t_f + t_s, t_s)
n = len(t)
a = 1.0
# array for storing b
b = zeros(n)
params = [a, b[0]]
# State trajectory
xs = zeros(n)
for i, ti in enumerate(t[1:]):
    x = odeint(f, xi, [t[i], t[i+1]], args=(params,))
    xs[i+1] = xi = x[-1]
    b[i+1] = params[1]    # store the output parameters
print t
print xs
print b

The output is:

t = [ 0.   0.2  0.4  0.6  0.8  1. ]
x = [ 0.          0.81873077  0.67032008  0.54881168  0.44932901  0.3678795 ]
b = [ 0.          0.10232638  0.20562135  0.30934571  0.41352631  0.51818916]
# b is close to the times of [0, 0.1, 0.2, 0.3, 0.4, 0.5], not exact though

I would then expect that b would just have the same values as the array
t / 2, but instead the time values are not on the points I specified. My
guess is that this has to do with the internal step size selection, and
that the last time the right-hand side is evaluated is at a time point
dictated by the internal time step, rather than at the points I would
like, namely the points in my time array.

One solution I can see is to just store a different time array to use
when plotting the output parameters, but I was wondering if there is an
alternative that would ensure the output params get assigned values
corresponding to the final time of each interval I pass to odeint. Does
anybody know if there is a way to do this?

The reason for even taking this approach is that in more complicated
systems there are a lot of repetitious calculations that I would like to
avoid doing in a separate step after numerical integration; these
calculations are being done regardless. In this example this isn't
apparent, but in other systems the calculations are more significant,
and so there are some computational savings to be had from this
approach.

~Luke
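For this toy example, a sketch of the post-processing alternative Luke
mentions -- one odeint call over the whole grid, with the auxiliary
quantity recomputed (vectorized) on exactly the requested times; see
also Rob Clewley's reply further below:

import numpy as np
from scipy.integrate import odeint

a = 1.0

def f(x, t):
    return -a * x

t = np.arange(0.0, 1.2, 0.2)
x = odeint(f, 1.0, t)    # integrate once over the full time array
b = t / 2.0              # auxiliary quantity, exactly on the grid
# Quantities that depend on the state can be evaluated from (t, x)
# here, after the integration, e.g. c = x.ravel() * t.
print t
print x.ravel()
print b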
From robert.kern at gmail.com  Mon Oct 19 19:22:45 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 19 Oct 2009 18:22:45 -0500
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
 array([expr(t) for t in T])
In-Reply-To: <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
 <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
 <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
 <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
 <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
 <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com>
 <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
 <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>
 <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>
Message-ID: <3d375d730910191622x87aeec3j1b3c9e48df1fb391@mail.gmail.com>

On Mon, Oct 19, 2009 at 00:11, David Warde-Farley wrote:
> On 18-Oct-09, at 3:31 PM, Robert Kern wrote:
>
>> The problem with using StackOverflow itself for our
>> purposes is that there is no single tag that can be used to
>> consolidate the relevant questions. I think we would benefit from a
>> more tightly focused community.
>
> One of the problems I see with StackOverflow is that there are a lot
> of wrong answers peddled around that seem right on the surface, but
> are actually incomplete, only work in very specific circumstances,
> etc. And because there's no single tag, it's hard for knowledgeable
> people to track those questions that are relevant and correct any
> errors.

I'm hoping that a site on scipy.org will get more people from this
community interested in helping out. Obviously there are quite a few
helpful people on the mailing lists already. The idea of using a
StackOverflow clone would be to make that kind of help more permanent
and easier to do. StackOverflow itself, while a great site in general,
is a little hard for smaller communities to use. It's great if you have
Django questions and answers, though. :-)

> Robert, will you have any time to join us for the sprint? Jarrod and I
> (and hopefully more) will be working on the sphinxified new site but
> I'd be glad to devote some cycles to moving advice over to Solace.

When is the sprint?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Mon Oct 19 19:26:24 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 19 Oct 2009 18:26:24 -0500
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
 array([expr(t) for t in T])
In-Reply-To: <624d3078-b1d5-4f01-be4b-686ab68bb8db@r5g2000yqb.googlegroups.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
 <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
 <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
 <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
 <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com>
 <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
 <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>
 <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>
 <624d3078-b1d5-4f01-be4b-686ab68bb8db@r5g2000yqb.googlegroups.com>
Message-ID: <3d375d730910191626r43d11865i9279696568561f17@mail.gmail.com>

On Mon, Oct 19, 2009 at 06:40, denis wrote:
>
> On Oct 19, 7:11 am, David Warde-Farley wrote:
>> On 18-Oct-09, at 3:31 PM, Robert Kern wrote:
>>
>> > The problem with using StackOverflow itself for our
>> > purposes is that there is no single tag that can be used to
>> > consolidate the relevant questions. I think we would benefit from a
>> > more tightly focused community.
>
>   Why not just tell people
> "use http://stackoverflow.com/questions/tagged/scipy;
> that will help the community to improve and collect answers."

Because "scipy" is an inappropriate tag for matplotlib questions. Or
numpy questions. Or IPython questions. Or SAGE questions.

> Further tags from:
>   area: optimize linalg interpol fft ... ~ the list in Wikipedia Scipy
>   question level: newbie ...
>   review level: ...
> would be nice, but I don't see much tag-the-world in scipy.org either --
> indexing, tagging, raising the quality of any big web site is really
> tough.
>
> In short, I (outsider) see a hard choice:
> - try to guide stackoverflow a bit
> - or ignore it, come up with something better ... a year later.

The clones are really quite nice. We don't have to "come up" with
anything. We just need to find time to deploy the software.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From rob.clewley at gmail.com  Mon Oct 19 20:05:31 2009
From: rob.clewley at gmail.com (Rob Clewley)
Date: Mon, 19 Oct 2009 20:05:31 -0400
Subject: [SciPy-User] making use of args in odeint
In-Reply-To: <99214b470910191458q41f5b3mc14156a3d9c9185@mail.gmail.com>
References: <99214b470910191458q41f5b3mc14156a3d9c9185@mail.gmail.com>
Message-ID: 

On Mon, Oct 19, 2009 at 5:58 PM, Luke wrote:
> I am integrating some ODE's and want to calculate some extra quantities
> during numerical integration. I was hoping to use the args option of
> odeint as a way to pass parameters which could be 'output' at each time
> step, but it isn't working the way I would expect. Here is a short
> example of integrating the equation:
>
> dx/dt = -a*x
>
> where a is a parameter, and we also want to calculate at every time
> step:
>
> b = t / 2.

odeint may call f for t values beyond those you specify, leaving the
last t/2 value to not be what you want it to be, as you suggest. Abusing
the params option is not a recommended way to use odeint.
Instead, why can't you dump the t value used when b is calculated to a
global *list* that will be dynamically allocated? Then you won't need to
worry about what internal time steps are taken or abuse params. I.e.,

aux_t = []
aux_b = []

def f(x, t, params):
    a = params[0]
    aux_t.append(t)
    aux_b.append(t / 2.)   # or any other function of t, x
    return -a*x

Then at least you have a consistent set of (t, b) pairs after
integration.

> The reason for even taking this approach is that in more complicated
> systems there are a lot of repetitious calculations that I would like
> to avoid doing in a separate step after numerical integration; these
> calculations are being done regardless. In this example this isn't
> apparent, but in other systems the calculations are more significant,
> and so there are some computational savings to be had from this
> approach.

If you really have such a complex problem that you can't just do that
(or it's time/memory inefficient) then you'll need to use a more
sophisticated ODE solver that allows you to make user-defined functions
and "auxiliary" variables that are output along with the state
variables. That would be PyDSTool, IMO. You can also require those
solvers to pass through predetermined time steps (for the output) and/or
use interpolated output so that you'd guarantee your b values are where
you want them to be.

-Rob

From abielr at gmail.com  Mon Oct 19 22:41:35 2009
From: abielr at gmail.com (Abiel Reinhart)
Date: Mon, 19 Oct 2009 22:41:35 -0400
Subject: [SciPy-User] Using scikits.timeseries with Chaco
Message-ID: 

Is there a way to use the functionality contained in
scikits.timeseries.lib.plotlib to extract tick spacing and tick
formatting information for a TimeSeries without doing any actual
matplotlib graphing? I would like to use the plotlib component to get
tick information which I can then pass to a plot I have created with
Chaco.

Thank you very much.

Abiel

From dwf at cs.toronto.edu  Mon Oct 19 23:32:23 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Mon, 19 Oct 2009 23:32:23 -0400
Subject: [SciPy-User] idiom for iterators, expr(T) if isscalar(T) else
 array([expr(t) for t in T])
In-Reply-To: <3d375d730910191622x87aeec3j1b3c9e48df1fb391@mail.gmail.com>
References: <243e0f9a-a098-4ce0-baf3-d4df07f6fcfd@l13g2000yqb.googlegroups.com>
 <1cd32cbb0910150856w6d23aa33lf51c113565a47ff7@mail.gmail.com>
 <1cd32cbb0910150913k60843d4ds7115d7bb0d9d3bdc@mail.gmail.com>
 <1cd32cbb0910151030t1a526ffv57a4c72df3d0e510@mail.gmail.com>
 <663bd01e-9ca6-4df6-9b4d-cba3b65cdb0a@z2g2000yqm.googlegroups.com>
 <3d375d730910161217y49c7ddd5p4e2d98bd3295af78@mail.gmail.com>
 <0eb660e1-f1b7-4210-97a8-db79e9229abe@p4g2000yqm.googlegroups.com>
 <3d375d730910181231v79b262b8l41c863ef83e9b01@mail.gmail.com>
 <7B12ADB0-D56F-4E88-AC35-BF5FFE786124@cs.toronto.edu>
 <3d375d730910191622x87aeec3j1b3c9e48df1fb391@mail.gmail.com>
Message-ID: <182ADE31-9568-40FC-A863-C7CF7D273D33@cs.toronto.edu>

On 19-Oct-09, at 7:22 PM, Robert Kern wrote:

> It's great if you
> have Django questions and answers, though. :-)

Indeed, and I've found it good for accumulating vim-fu. And there seem
to be at least some numpy users (among them Alex Martelli) answering
questions.

>> Robert, will you have any time to join us for the sprint? Jarrod and I
>> (and hopefully more) will be working on the sphinxified new site but
>> I'd be glad to devote some cycles to moving advice over to Solace.
>
> When is the sprint?

Next weekend - October 24 and 25.
David

From leon_r_adams at hotmail.com  Mon Oct 19 23:44:27 2009
From: leon_r_adams at hotmail.com (Leon Adams)
Date: Mon, 19 Oct 2009 20:44:27 -0700
Subject: [SciPy-User] scipy.stats.fit inquiry
Message-ID: 

Hi all,

I am using the scipy.stats module to perform some distribution fitting.
What I cannot seem to get a handle on is how to compare the quality of
fit achieved. At this stage the docs do not seem to be quite as
useful... As an example, I fit my data using

fitExp = st.expon.fit(data)

which returns an array [ 0.99999999  1.33310547]

How do we access the resulting maximized likelihood, mean square
errors ... Also, how would we go about calculating KS tests for the
fitted parameters? Mainly I am interested in how good this fit is, and
what diagnostics we have available.

Thanks in advance

From peridot.faceted at gmail.com  Mon Oct 19 23:53:16 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 19 Oct 2009 23:53:16 -0400
Subject: [SciPy-User] scipy.stats.fit inquiry
In-Reply-To: 
References: 
Message-ID: 

2009/10/19 Leon Adams:
> I am using the scipy.stats module to perform some distribution fitting.
> What I cannot seem to get a handle on is how to compare the quality of
> fit achieved. At this stage the docs do not seem to be quite as
> useful... As an example, I fit my data using
>
> fitExp = st.expon.fit(data)
>
> which returns an array [ 0.99999999  1.33310547]
>
> How do we access the resulting maximized likelihood, mean square
> errors ... Also, how would we go about calculating KS tests for the
> fitted parameters? Mainly I am interested in how good this fit is, and
> what diagnostics we have available.

I'm not sure what tools we have in scipy, but there's always the
everything-looks-like-a-nail approach: fit for the parameters, then use
the fitted distribution to generate many data sets and see how many of
them are a better fit than yours.

We do have a K-S test, which would serve as a reasonable way to answer
"how well does this data fit this distribution". The p value you get
will be wrong if you obtained the distribution by fitting, but the K-S
value will still be a reasonable measure of quality-of-fit (which you
can compare to the quality of models fit to generated data sets). The
scatter in model parameters obtained by fitting generated data sets will
give you an estimate of the uncertainties on the fitted parameters.

For smarter approaches, for example Cash statistics, I'm not sure
whether scipy has anything more sophisticated, but at least scipy's
distributions will give you PDFs you can take negative logs of.

Anne

From pgmdevlist at gmail.com  Mon Oct 19 23:57:18 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 19 Oct 2009 23:57:18 -0400
Subject: [SciPy-User] Using scikits.timeseries with Chaco
In-Reply-To: 
References: 
Message-ID: 

On Oct 19, 2009, at 10:41 PM, Abiel Reinhart wrote:

> Is there a way to use the functionality contained in
> scikits.timeseries.lib.plotlib to extract tick spacing and tick
> formatting information for a TimeSeries without doing any actual
> matplotlib graphing? I would like to use the plotlib component to get
> tick information which I can then pass to a plot I have created with
> Chaco.

Abiel,

Unfortunately, neither Matt nor I have any experience with Chaco.
From pgmdevlist at gmail.com  Mon Oct 19 23:57:18 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 19 Oct 2009 23:57:18 -0400
Subject: [SciPy-User] Using scikits.timeseries with Chaco
In-Reply-To:
References:
Message-ID:

On Oct 19, 2009, at 10:41 PM, Abiel Reinhart wrote:

> Is there a way to use the functionality contained in
> scikits.timeseries.lib.plotlib to extract tick spacing and tick
> formatting information for a TimeSeries without doing any actual
> matplotlib graphing? I would like to use the plotlib component to get
> tick information which I can then pass to a plot I have created with
> Chaco.

Abiel,
Unfortunately, neither Matt nor I have any experience with Chaco.
However, it'd be great to have such a component. You shouldn't have any
problems adapting the sources to Chaco, but don't hesitate to contact
me off-list if you have some questions.
Cheers.

From robert.kern at gmail.com  Tue Oct 20 01:17:40 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 20 Oct 2009 00:17:40 -0500
Subject: [SciPy-User] using scipy.optimize.fmin_slsqp and setting
	bounds=(None, None)
In-Reply-To: <961987.41200.qm@web45908.mail.sp1.yahoo.com>
References: <961987.41200.qm@web45908.mail.sp1.yahoo.com>
Message-ID: <3d375d730910192217t40b14a43y3d3d57c4af11b8a2@mail.gmail.com>

On Mon, Oct 19, 2009 at 16:46, Peter Halverson wrote:
> I'm not sure if this is user error or an actual bug. When I attempt to set
> my bounds in fmin_slsqp the option bounds =[(-10,10),(0,None)] is not
> recognized. Scipy crashes with
>
> IndexError                                Traceback (most recent call last)
>
> C:\Documents and Settings\All Users\Documents\Python\ in ()
>
> C:\Python25\lib\site-packages\scipy\optimize\slsqp.pyc in fmin_slsqp(func,
> x0, eqcons, f_eqcons, ieqcons, f_ieqcons, bounds, fprime, fprime_eqcons,
> fprime_ieqcons, args, iter, acc, iprint, full_output, epsilon)
>     244             if bounds[i][0] > bounds[i][1]:
>     245                 raise ValueError, \
> --> 246                 'SLSQP Error: lb > ub in bounds[' + str(i) +']  ' +
> str(bounds[4])
>     247
>     248     xl = array( [ b[0] for b in bounds ] )
>
> IndexError: list index out of range

Bug. A couple of bugs, actually.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From roger.herikstad at gmail.com  Tue Oct 20 03:02:03 2009
From: roger.herikstad at gmail.com (Roger Herikstad)
Date: Tue, 20 Oct 2009 15:02:03 +0800
Subject: [SciPy-User] [ANN] Python(x,y) 2.6.3.0 released
In-Reply-To: <92091a3d-4687-46fa-91c2-a9b58501d7fe@a6g2000vbp.googlegroups.com>
References: <4ADB5A48.1030605@pythonxy.com>
	<92091a3d-4687-46fa-91c2-a9b58501d7fe@a6g2000vbp.googlegroups.com>
Message-ID:

Hi,

I second that. I'm trying to convert my matlab-using colleagues to
python users, and the main obstacle in getting them to change is the
integrated environment matlab offers. If we could get Python(x,y) for
our Macs, that would be great!

~ Roger

On Tue, Oct 20, 2009 at 12:15 AM, Michael Aye wrote:
> This is an awesome distribution, but please, please, bring it out for
> Mac OS X, okay? :)
>
> Best regards,
> Michael
>
> On Oct 18, 8:11 pm, Pierre Raybaut wrote:
>> Hi all,
>>
>> I'm quite pleased (and relieved) to announce that Python(x,y) version
>> 2.6.3.0 has been released. It is the first release based on Python 2.6
>> -- note that Python(x,y) version number will now follow the included
>> Python version (Python(x,y) vX.Y.Z.N will be based on Python vX.Y.Z).
>>
>> Python(x,y) is a free Python distribution providing a ready-to-use
>> scientific development software for numerical computations, data
>> analysis and data visualization based on Python programming language, Qt
>> graphical user interfaces (and development framework), Eclipse
>> integrated development environment and Spyder interactive development
>> environment. Its purpose is to help scientific programmers used to
>> interpreted languages (such as MATLAB or IDL) or compiled languages
>> (C/C++ or Fortran) to switch to Python.
>>
>> It is now available for Windows XP/Vista/7 (as well as for Ubuntu
>> through the pythonxy-linux project -- note that included software may
>> differ from the Windows version):
>> http://www.pythonxy.com
>>
>> Major changes since v2.1.17:
>>     * Python 2.6.3
>>     * Spyder 1.0.0 -- the Scientific PYthon Development EnviRonment, a
>> powerful MATLAB-like development environment introducing exclusive
>> features in the scientific Python community
>> (http://packages.python.org/spyder/)
>>     * MinGW 4.4.0 -- including gcc 4.4.0 and gfortran
>>     * Pydev 1.5.0 -- now including the powerful code analysis features
>> of Pydev Extensions (formerly available as a commercial extension to the
>> free Pydev plugin)
>>     * Enthought Tool Suite 3.3.0
>>     * PyQt 4.5.4 and PyQwt 5.2.0
>>     * VTK 5.4.2
>>     * ITK 3.16 -- Built for Python 2.6 thanks to the help of Charl
>> Botha, DeVIDE (Delft Visualisation and Image processing Development
>> Environment)
>>
>> Complete release notes:
>> http://www.pythonxy.com/download.php
>>
>> - Pierre
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-U... at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From josef.pktd at gmail.com  Tue Oct 20 06:52:15 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 20 Oct 2009 06:52:15 -0400
Subject: [SciPy-User] scipy.stats.fit inquiry
In-Reply-To:
References:
Message-ID: <1cd32cbb0910200352k14dbdeb2nedad5a5c37b1bf90@mail.gmail.com>

On Mon, Oct 19, 2009 at 11:53 PM, Anne Archibald wrote:
> 2009/10/19 Leon Adams :
>
>> I am using the scipy.stats module to perform some distribution fitting.
>> What I cannot seem to get a handle on is how to compare the quality of
>> fit achieved. At this stage the docs do not seem to be quite as
>> useful... As an example, I fit my data using
>>
>> fitExp = st.expon.fit(data)
>>
>> which returns an array [ 0.99999999  1.33310547]
>>
>> How do we access the resulting maximized likelihood, mean square errors ...
>> Also, how would we go about calculating KS tests for the fitted parameters
>> ?? Mainly I am interested in how good this fit is, and what diagnostics we
>> have available.
>
> I'm not sure what tools we have in scipy, but there's always the
> everything-looks-like-a-nail approach: fit for the parameters, then
> use the fitted distribution to generate many data sets and see how
> many of them are a better fit than yours.
>
> We do have a K-S test, which would serve as a reasonable way to answer
> "how well does this data fit this distribution". The p value you get
> will be wrong if you obtained the distribution by fitting, but the K-S
> value will still be a reasonable measure of quality-of-fit (which you
> can compare to the quality of models fit to generated data sets). The
> scatter in model parameters obtained by fitting generated data sets
> will give you an estimate of the uncertainties on the fitted
> parameters.
>
> For smarter approaches, for example Cash statistics, I'm not sure
> whether scipy has anything more sophisticated, but at least scipy's
> distributions will give you PDFs you can take negative logs of.
>
> Anne
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

For other fit tests, scipy.stats also has chisquare, and I have written
a power_discrepancy test, which both require binning.
scipy.stats also has a probability plot, statsmodels a qqplot for
visual inspections, and scipy.stats has anderson darling (``anderson``)
for exponential, though I never verified whether the results are
correct.

There are a few examples for the KS test in the scipy.stats tutorial
and I'm using it heavily in the scipy.stats.distributions tests.

I never heard about the Cash statistic.

I also have an example
http://code.google.com/p/joepython/source/browse/trunk/joepython/scipystats/enhance/try_VaR.py
where I tried to fit a dataset to all distributions that are available
in scipy.stats.distributions.

The main remaining problem is that, in most cases, we wouldn't want to
estimate the loc, if the distribution has a finite boundary in the
support.

Josef

From kmichael.aye at googlemail.com  Tue Oct 20 08:19:38 2009
From: kmichael.aye at googlemail.com (Michael Aye)
Date: Tue, 20 Oct 2009 05:19:38 -0700 (PDT)
Subject: [SciPy-User] Enthought's Traits and Python 2.6
In-Reply-To: <3d375d730910190923q7514b7f8xc0797ea84589b7a@mail.gmail.com>
References: <3d375d730910190923q7514b7f8xc0797ea84589b7a@mail.gmail.com>
Message-ID:

On Oct 19, 6:23 pm, Robert Kern wrote:
> On Mon, Oct 19, 2009 at 11:17, Michael Aye wrote:
> > Hi all,
> > the current downloadable Enthought Distribution is running Python 2.5.
> > Is there a way to use the Enthought traits with Python 2.6, as i need
> > 2.6 for its multiprocess library.
>
> You will want to ask questions about the Enthought Tool Suite on
> enthought-dev, not scipy-user.
>
>  https://mail.enthought.com/mailman/listinfo/enthought-dev
>

Ah, I see, I thought that one was only for developers ('-dev'). Nice to
know that user questions can be posed there, too!

> There are a couple of things to note. The first is that the version of
> Python used as the basis for the Enthought Python Distribution is
> unrelated to the versions of Python supported by the Enthought Tool
> Suite. You should be able to build Traits from source with Python 2.6
> just fine.
>
> However, it is also worth noting that you do not need Python 2.6 in
> order to use multiprocessing. It is available as a third party package
> backported to Python 2.5. In fact, it is bundled with the Enthought
> Python Distribution.
>
>  http://pypi.python.org/pypi/multiprocessing
>

I see, also very good to know! Thanks very much!

And it was a good webinar, thanks! ;)

BR,
Michael

From josef.pktd at gmail.com  Tue Oct 20 10:13:58 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 20 Oct 2009 10:13:58 -0400
Subject: [SciPy-User] using scipy.optimize.fmin_slsqp and setting
	bounds=(None, None)
In-Reply-To: <961987.41200.qm@web45908.mail.sp1.yahoo.com>
References: <961987.41200.qm@web45908.mail.sp1.yahoo.com>
Message-ID: <1cd32cbb0910200713m60c8761cn3063555ea7710374@mail.gmail.com>

On Mon, Oct 19, 2009 at 5:46 PM, Peter Halverson wrote:
> I'm not sure if this is user error or an actual bug. When I attempt to set
> my bounds in fmin_slsqp the option bounds =[(-10,10),(0,None)] is not
> recognized. Scipy crashes with
>
> IndexError                                Traceback (most recent call last)
>
> C:\Documents and Settings\All Users\Documents\Python\ in ()
>
> C:\Python25\lib\site-packages\scipy\optimize\slsqp.pyc in fmin_slsqp(func,
> x0, eqcons, f_eqcons, ieqcons, f_ieqcons, bounds, fprime, fprime_eqcons,
> fprime_ieqcons, args, iter, acc, iprint, full_output, epsilon)
>     244             if bounds[i][0] > bounds[i][1]:
>     245                 raise ValueError, \
> --> 246                 'SLSQP Error: lb > ub in bounds[' + str(i) +']  ' +
> str(bounds[4])
>     247
>     248     xl = array( [ b[0] for b in bounds ] )
>
> IndexError: list index out of range
>
> My code:
>
> import numpy as np
> import scipy as sp
> from scipy.optimize import fmin_slsqp as fmincon
>
> #The purpose of this script is
> # minimize x0+x1^2
> # where a and b are constants
> # and 0 # and x1-x2>0
>
> def fitfun(x):
>     t=x[0]+x[1]**2
>     return t
>
> def confun(x):
>     return (x[0]-x[1])
>
> bnds =[(-10,10),(0,None)]
> guess=[0.5,0.5]
>
> fmincon(fitfun,guess,ieqcons=[confun],bounds=bnds)
>
> If this is a bug I will report it to the appropriate places. If it's not a
> bug please help me sort it out.
>

I think bounds are just assumed to be finite (``inf`` doesn't work
either). If you set the bounds to a number that is unlikely to be
binding, then it works.

For example, with

bnds =[(-10,10),(0,1000)]

your example and several variations of it with different constraints
binding, that I tried, work without problems.

Josef

>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
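[Peter's script with Josef's workaround substituted in, as a runnable
whole; the 1000 is an arbitrary "large enough" stand-in for the
unbounded side, not a value from the original thread:]

import numpy as np
from scipy.optimize import fmin_slsqp

def fitfun(x):
    return x[0] + x[1]**2

def confun(x):
    return x[0] - x[1]    # required to be >= 0

guess = [0.5, 0.5]
bnds = [(-10, 10), (0, 1000)]   # finite stand-in for (0, None)
xopt = fmin_slsqp(fitfun, guess, ieqcons=[confun], bounds=bnds)
print xopt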
It turned out that it takes a long time (around 5 min) to load this data the size of which is around 120MB. On the other hand it take only a couple of seconds to load it in Matlab. What is wrong with loadmat in scipy? Thanks, Alireza ---------------------------------------------------------------- SISSA Webmail https://webmail.sissa.it/ Powered by SquirrelMail http://www.squirrelmail.org/ From robert.kern at gmail.com Tue Oct 20 11:35:40 2009 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 20 Oct 2009 10:35:40 -0500 Subject: [SciPy-User] loading mat file in scipy In-Reply-To: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it> References: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it> Message-ID: <3d375d730910200835p65aa28b3i92c72759dbed5f20@mail.gmail.com> On Tue, Oct 20, 2009 at 10:33, Alireza Alemi-Neissi wrote: > Dear All, > > I do not know if it is the right place to ask this question. If not, > please let me know where I can post it. > This the problem: > > I have recently tried to use scipy.io.loadmat() to load a mat file to > python. > The mat file is a complicated nested structure in matlab. It turned out > that it takes a long time (around 5 min) to load this data the size of > which is around 120MB. > On the other hand it take only a couple of seconds to load it in Matlab. > What is wrong with loadmat in scipy? Without seeing the file, I have no idea. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robince at gmail.com Tue Oct 20 12:25:07 2009 From: robince at gmail.com (Robin) Date: Tue, 20 Oct 2009 17:25:07 +0100 Subject: [SciPy-User] loading mat file in scipy In-Reply-To: <3d375d730910200835p65aa28b3i92c72759dbed5f20@mail.gmail.com> References: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it> <3d375d730910200835p65aa28b3i92c72759dbed5f20@mail.gmail.com> Message-ID: <2d5132a50910200925u279cd97av1855069e12e72112@mail.gmail.com> I think these kind of heavily nested structures are just very slow to load with scipy.io.loadmat. I noticed this previously with a similarly structured file (but this is only 1MB so much more convenient for testing) which I provided. http://www.robince.net/robince/structs_cells.mat http://mail.scipy.org/pipermail/scipy-user/2009-April/020860.html Make sure you are using a recent version of scipy. I think there was some performance fixes that improved it - with current scipy SVN on a macbook pro structs_cells.mat takes about 28s to load (structs_as_record doesn't seem to make a difference). This is already some improvement (40s in April, 4 minutes prior to that). On Matlab it takes about 1.4s. Cheers Robin On Tue, Oct 20, 2009 at 4:35 PM, Robert Kern wrote: > On Tue, Oct 20, 2009 at 10:33, Alireza Alemi-Neissi wrote: >> Dear All, >> >> I do not know if it is the right place to ask this question. If not, >> please let me know where I can post it. >> This the problem: >> >> I have recently tried to use scipy.io.loadmat() to load a mat file to >> python. >> The mat file is a complicated nested structure in matlab. It turned out >> that it takes a long time (around 5 min) to load this data the size of >> which is around 120MB. >> On the other hand it take only a couple of seconds to load it in Matlab. >> What is wrong with loadmat in scipy? > > Without seeing the file, I have no idea. 
> > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ?-- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Tue Oct 20 12:50:37 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 20 Oct 2009 12:50:37 -0400 Subject: [SciPy-User] scipy.stats.fit inquiry In-Reply-To: References: <1cd32cbb0910200352k14dbdeb2nedad5a5c37b1bf90@mail.gmail.com> Message-ID: <1cd32cbb0910200950l7381acd0y1c788794b2e80a73@mail.gmail.com> On Tue, Oct 20, 2009 at 11:13 AM, Anne Archibald wrote: > 2009/10/20 ?: > >> I never heard about the Cash statistic. > > It's a clever trick for estimating uncertainties on fitted parameters; > you do some magic with the likelihood ratio and you get statistic that > behaves like chi-squared, apart from being exactly zero at your > best-fit value. So it's no use for esstimating quality-of-fit, but you > can use it to get error regions just the way you would if you'd had > Gaussian statistics and a chi-squared fit. (Cash 1979, "Parameter > estimation in astronomy through application of the likelihood ratio") > > Incidentally, I have some code implementing the Kuiper test, a > modified K-S test that is sensitive to different aspects of the shape > of the distribution, and (more importantly for me) is invariant on > shifting a distribution or sample modulo 1. I haven't submitted it for > inclusion because the interface I used is a little different from that > used by scipy's K-S test, but if there's interest I'd be happy to > contribute it. More tests in scipy.stats is always good (at least as long as I don't have to chase down the references to write the tests for the tests.) How do you get the pvalue or critical values for Kuiper, since the distribution of the test statistic is not a standard distribution? (From what I understand from Sherpa, is that Cash is used as objective function for the maximum likelihood estimation of a Poisson process.) Looking for Kuiper, I found a nice overview of a large list of goodness of fit tests, used in natural sciences. http://www.ge.infn.it/statisticaltoolkit/gof/deployment/userdoc/statistics/applicability.html with article in http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=29603&arnumber=1344284&count=103&index=22 And apropos circular statistic, which I know nothing about: stats.distribution.vonmises is the only distribution that has bounded (or circular) support but doesn't define the bounds (a, b). This screws up numerical integration, e.g. for the moment calculation. Is vonmises actually used on circular support? To enable integration, we need to define a and b (e.g [-pi,pi] as in numpy random) or introduce new bounds specifically for integration in this case. 
Josef

>
> Anne
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From silva at lma.cnrs-mrs.fr  Tue Oct 20 13:10:00 2009
From: silva at lma.cnrs-mrs.fr (Fabricio Silva)
Date: Tue, 20 Oct 2009 19:10:00 +0200
Subject: [SciPy-User] loading mat file in scipy
In-Reply-To: <2d5132a50910200925u279cd97av1855069e12e72112@mail.gmail.com>
References: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it>
	<3d375d730910200835p65aa28b3i92c72759dbed5f20@mail.gmail.com>
	<2d5132a50910200925u279cd97av1855069e12e72112@mail.gmail.com>
Message-ID: <1256058600.2900.3.camel@PCTerrusse>

On Tuesday 20 October 2009 at 17:25 +0100, Robin wrote:
> I think these kind of heavily nested structures are just very slow to
> load with scipy.io.loadmat.

I deal with 250Mb .mat files generated by ControlDesk (Dspace?) and
containing nested structures (max depth: 4 or 5). Loading it with
loadmat is not a problem...
--
Fabrice Silva
Laboratory of Mechanics and Acoustics (CNRS, UPR 7051)

From matthew.brett at gmail.com  Tue Oct 20 13:24:31 2009
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 20 Oct 2009 12:24:31 -0500
Subject: [SciPy-User] loading mat file in scipy
In-Reply-To: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it>
References: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it>
Message-ID: <1e2af89e0910201024q1276763aj54d33ca4cb5ee301@mail.gmail.com>

Hi,

> I have recently tried to use scipy.io.loadmat() to load a mat file to
> python.
> The mat file is a complicated nested structure in matlab. It turned out
> that it takes a long time (around 5 min) to load this data the size of
> which is around 120MB.
> On the other hand it take only a couple of seconds to load it in Matlab.
> What is wrong with loadmat in scipy?

What version of scipy do you have? There was a performance bug I put
in for 0.7 and removed for 0.7.1.

If you are using 0.7.1, then maybe you can let me have a look at your
file somehow - maybe by sharing a dropbox folder?

Thanks a lot,

Matthew

From peridot.faceted at gmail.com  Tue Oct 20 14:41:36 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 20 Oct 2009 14:41:36 -0400
Subject: [SciPy-User] scipy.stats.fit inquiry
In-Reply-To: <1cd32cbb0910200950l7381acd0y1c788794b2e80a73@mail.gmail.com>
References: <1cd32cbb0910200352k14dbdeb2nedad5a5c37b1bf90@mail.gmail.com>
	<1cd32cbb0910200950l7381acd0y1c788794b2e80a73@mail.gmail.com>
Message-ID:

2009/10/20 ?:
> On Tue, Oct 20, 2009 at 11:13 AM, Anne Archibald
> wrote:
>>
>> Incidentally, I have some code implementing the Kuiper test, a
>> modified K-S test that is sensitive to different aspects of the shape
>> of the distribution, and (more importantly for me) is invariant on
>> shifting a distribution or sample modulo 1. I haven't submitted it for
>> inclusion because the interface I used is a little different from that
>> used by scipy's K-S test, but if there's interest I'd be happy to
>> contribute it.
>
> More tests in scipy.stats is always good (at least as long as I don't
> have to chase down the references to write the tests for the tests.)
> How do you get the pvalue or critical values for Kuiper, since the
> distribution of the test statistic is not a standard distribution?

There's a special function, like the one for the K-S test (but
obviously different in details), which I implemented. I have tests for
the Kuiper test too.
Most of the code is at home and in any case needs
some polishing, but I've attached kuiper.py just so you can see what's
there. (There's also some extra stuff for dealing with different
exposure times for different phases, which doesn't need to go in.)

Some references are Paltani 2004, "Searching for periods in X-ray
observations using Kuiper's test. Application to the ROSAT PSPC
archive", and Kuiper 1962, "Tests concerning random points on a
circle" (which I haven't read).

I also have some code for the H test (essentially looking at the
Fourier coefficients to find how many harmonics are worth including
and what the significance is; de Jager et al. 1989 "A powerful test for
weak periodic signals with unknown light curve shape in sparse data").
But circular statistics seem to be a bit obscure, so I'm not sure how
much effort should go into putting this in scipy.

> (From what I understand from Sherpa, is that Cash is used as
> objective function for the maximum likelihood estimation of a
> Poisson process.)

In principle it's more general, but it's used, for example, in the
X-ray spectral fitting program XSPEC for when you only have a handful
of photons in each energy bin.

> Looking for Kuiper, I found a nice overview of a large list of goodness
> of fit tests, used in natural sciences.
>
> http://www.ge.infn.it/statisticaltoolkit/gof/deployment/userdoc/statistics/applicability.html
>
> with article in
> http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=29603&arnumber=1344284&count=103&index=22

Hmm, I'll have to look into the Watson statistic, I haven't run into it before.

> And apropos circular statistic, which I know nothing about:
>
> stats.distribution.vonmises is the only distribution that has bounded
> (or circular) support but doesn't define the bounds (a, b). This
> screws up numerical integration, e.g. for the moment calculation.
>
> Is vonmises actually used on circular support?
> To enable integration, we need to define a and b (e.g [-pi,pi] as
> in numpy random) or introduce new bounds specifically for
> integration in this case.

I use circular statistics quite a bit, since it's the appropriate
toolkit for working with X-ray pulsar pulse profiles. In particular
the von Mises distribution is handy for producing simulated pulse
profiles (there's also a sense in which it is the circular analog of
the normal distribution; it's maximum entropy for a fixed first moment
IIRC). Unfortunately the generic stats interface is quite clumsy for
this particular case. If I recall correctly as of 0.6.0 the interface
was kind of miserable to use, since the CDF had a discontinuity at -pi
and pi, and using loc moved the discontinuity along with the function.
(I think there was also a severe bug for large values of the shape
parameter.) I'm not sure what the interface is like in more recent
versions, since 0.6.0 is what's on the computers here at work, but I
think it's better.

It's worth defining the boundaries, but I don't think you'll get
useful moment calculations out of it, since circular moments are
defined differently from linear moments: rather than int(x**n*pdf(x))
they're int(exp(2j*pi*n*x)*pdf(x)). There are explicit formulas for
these for the von Mises distribution, though, and they're useful
because they are essentially the Fourier coefficients of the pdf.

For context, I've done some work with representing circular
probability distributions (= pulse profiles) in terms of their moments
(= Fourier series), and in particular using a kernel density estimator
with either a sinc kernel (= truncation) or a von Mises kernel (=
scaling coefficients by von Mises moments). The purpose was to get
not-too-biased estimates of the modulation amplitudes of X-ray
pulsations; the paper has been kind of sidetracked by other work.

Anne
-------------- next part --------------
A non-text attachment was scrubbed...
Name: kuiper.py
Type: text/x-python
Size: 6018 bytes
Desc: not available
URL:
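[Since kuiper.py itself was scrubbed from the archive, a minimal sketch
of the core statistic follows for reference; kuiper_statistic is a
hypothetical name, the false-positive-probability machinery Anne
mentions is omitted, and cdf is any vectorized callable:]

import numpy as np

def kuiper_statistic(data, cdf):
    # Kuiper's V = D+ + D- for samples `data` against callable `cdf`;
    # unlike K-S, V is invariant under cyclic shifts of data modulo 1
    data = np.sort(data)
    n = len(data)
    cdfv = cdf(data)
    d_plus = (np.arange(1.0, n + 1) / n - cdfv).max()
    d_minus = (cdfv - np.arange(0.0, n) / n).max()
    return d_plus + d_minus

# e.g. testing uniform samples on [0, 1) against the uniform CDF:
print kuiper_statistic(np.random.rand(1000), lambda x: x)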
From bruce at clearscienceinc.com  Tue Oct 20 16:23:44 2009
From: bruce at clearscienceinc.com (Bruce Ford)
Date: Tue, 20 Oct 2009 16:23:44 -0400
Subject: [SciPy-User] Matrix Element Comparison
Message-ID:

There must be an elegant way to do this, but I've been staring at
Numpy functions to no avail.

I want to create a matrix that counts the number of times a condition
is met for each grid point.

print grid.shape   # print (31,18)
count_grid = np.zeros_like(grid)  #new grid for counting
exceed = 1

I want to do an element-by-element comparison and when a gridpoint
value is > 1, add one to count_grid at that same grid point.

I'm looking for an elegant way to do something like this in an
element-wise fashion...

if grid>exceed:
    count_grid = count_grid+1

Any pointers in the right direction would be appreciated!

Bruce
---------------------------------------
Bruce W. Ford
Clear Science, Inc.
bruce at clearscienceinc.com

From robert.kern at gmail.com  Tue Oct 20 16:27:21 2009
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 20 Oct 2009 15:27:21 -0500
Subject: [SciPy-User] Matrix Element Comparison
In-Reply-To:
References:
Message-ID: <3d375d730910201327q2b3ef426v48fb9deac5dd83c9@mail.gmail.com>

On Tue, Oct 20, 2009 at 15:23, Bruce Ford wrote:
> There must be an elegant way to do this, but I've been staring at
> Numpy functions to no avail.
>
> I want to create a matrix that counts the number of times a condition
> is met for each grid point.
>
> print grid.shape   # print (31,18)
> count_grid = np.zeros_like(grid)  #new grid for counting
> exceed = 1
>
> I want to do an element-by-element comparison and when a gridpoint
> value is > 1, add one to count_grid at that same grid point.
>
> I'm looking for an elegant way to do something like this in an
> element-wise fashion...
>
> if grid>exceed:
>     count_grid = count_grid+1
>
> Any pointers in the right direction would be appreciated!

count_grid += (grid > exceed)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco
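[The one-liner works because boolean arrays add as 0/1. A short sketch
of the accumulation over a sequence of grids, with random fields
standing in for the real data:]

import numpy as np

grids = np.random.randn(10, 31, 18)   # stand-in for ten (31, 18) fields
exceed = 1
count_grid = np.zeros((31, 18), dtype=int)
for grid in grids:
    count_grid += grid > exceed       # booleans count as 0/1

# equivalent vectorized form over the whole stack:
count_grid2 = (grids > exceed).sum(axis=0)
assert (count_grid == count_grid2).all()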
From bruce at clearscienceinc.com  Tue Oct 20 17:19:13 2009
From: bruce at clearscienceinc.com (Bruce Ford)
Date: Tue, 20 Oct 2009 17:19:13 -0400
Subject: [SciPy-User] Matrix Element Comparison
In-Reply-To: <3d375d730910201327q2b3ef426v48fb9deac5dd83c9@mail.gmail.com>
References:
	<3d375d730910201327q2b3ef426v48fb9deac5dd83c9@mail.gmail.com>
Message-ID:

That is elegant. I'm not finding much in the docs about this kind of
operator. Thanks much!
---------------------------------------
Bruce W. Ford
Clear Science, Inc.
bruce at clearscienceinc.com
bruce.w.ford.ctr at navy.smil.mil
http://www.ClearScienceInc.com
Phone/Fax: 904-379-9704
8241 Parkridge Circle N.
Jacksonville, FL 32211
Skype: bruce.w.ford
Google Talk: fordbw at gmail.com

On Tue, Oct 20, 2009 at 4:27 PM, Robert Kern wrote:
> On Tue, Oct 20, 2009 at 15:23, Bruce Ford wrote:
>> There must be an elegant way to do this, but I've been staring at
>> Numpy functions to no avail.
>>
>> I want to create a matrix that counts the number of times a condition
>> is met for each grid point.
>>
>> print grid.shape   # print (31,18)
>> count_grid = np.zeros_like(grid)  #new grid for counting
>> exceed = 1
>>
>> I want to do an element-by-element comparison and when a gridpoint
>> value is > 1, add one to count_grid at that same grid point.
>>
>> I'm looking for an elegant way to do something like this in an
>> element-wise fashion...
>>
>> if grid>exceed:
>>     count_grid = count_grid+1
>>
>> Any pointers in the right direction would be appreciated!
>
> count_grid += (grid > exceed)
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>  -- Umberto Eco
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From all6junk at gmail.com  Tue Oct 20 19:44:44 2009
From: all6junk at gmail.com (all6junk at gmail.com)
Date: Wed, 21 Oct 2009 01:44:44 +0200
Subject: [SciPy-User] scikits.timeseries: Using dtype in tsfromtxt
Message-ID:

Hi all,

I am new to using numpy and scikits.timeseries. I am facing a small
difficulty in loading time series data from a text file which has data
like

[test_data.csv]
20051017, 380.00, 386.30
20051018, 386.85, 388.55
20051019, 371.60, 373.55
20051020, 365.30, 371.85
20051021, 360.35, 368.50
20051024, 368.50, 379.90

I use the following code to read in the timeseries data

[]
import numpy.ma as ma
import datetime
import scikits.timeseries as ts
import matplotlib.pyplot as plt
import scikits.timeseries.lib.plotlib as tpl

def dconv(s):
    s = str(s)
    yyyy = int(s[:4])
    mm = int(s[4:6])
    dd = int(s[6:8])
    return ts.Date('B', year=yyyy, month=mm, day=dd)

def loadscrip(scripfile):
    dt = np.dtype([('date', 'a8'), ('open', 'f4'), ('high', 'f4')])
    ser = ts.tsfromtxt(scripfile, dtype=dt,
                       delimiter=',', datecols=(0), dateconverter=dconv)
    return ser

if __name__ == '__main__':
    scripfile = 'test_data.csv'
    scripdata = loadscrip(scripfile)

I get the following error,

[]
Traceback (most recent call last):
  File "tsfromtxt.py", line 24, in
    scripdata = loadscrip(scripfile)
  File "tsfromtxt.py", line 19, in loadscrip
    delimiter=',', datecols=(0), dateconverter=dconv)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scikits.timeseries-0.91.2-py2.6-macosx-10.3-i386.egg/scikits/timeseries/extras.py", line 432, in tsfromtxt
    mrec = genfromtxt(fname, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy-1.3.0-py2.6-macosx-10.3-i386.egg/numpy/lib/io.py", line 920, in genfromtxt
    converters[i].update(conv, default=None,
IndexError: list index out of range

I know I am making some silly error. But probably someone can save me
from pulling my hair out.
Cheers,
Chaitanya

From pgmdevlist at gmail.com  Tue Oct 20 21:37:11 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 20 Oct 2009 21:37:11 -0400
Subject: [SciPy-User] scikits.timeseries: Using dtype in tsfromtxt
In-Reply-To:
References:
Message-ID:

On Oct 20, 2009, at 7:44 PM, all6junk at gmail.com wrote:

Hi Chaitanya,

>
> def loadscrip(scripfile):
>     dt = np.dtype([('date', 'a8'), ('open', 'f4'), ('high', 'f4')])
>     ser = ts.tsfromtxt(scripfile, dtype=dt,
>                        delimiter=',', datecols=(0), dateconverter=dconv)
>     return ser

That's part of your problem here: you should use
dtype=[('open', 'f4'), ('high', 'f4')] instead of
dtype=[('date', 'a8'), ('open', 'f4'), ('high', 'f4')]. The reason is
that with your dtype, you'd ask for the date information twice.

Unfortunately, you also uncovered a bug (actually, a couple...),
related to the fact that you want "f4" instead of "f8". For the time
being, if you can, just use dtype=[('open', float), ('high', float)].
If you can't, then first use dtype=[('open', float), ('high', float)],
then create an empty series with dtype=[('open', 'f4'), ('high', 'f4')]
and fill it...

Sorry for the inconvenience and not being able to give you a better
solution right now. I should be able to work on that in the next couple
of days and keep you posted. I need to warn you that the fixes may not
work w/ numpy 1.3.0. We'll see.
Sorry again
P.

From schlesin at cshl.edu  Wed Oct 21 00:02:53 2009
From: schlesin at cshl.edu (Felix Schlesinger)
Date: Wed, 21 Oct 2009 04:02:53 +0000 (UTC)
Subject: [SciPy-User] faster nonzero indices
Message-ID:

Is there a faster way to do:

foo = scipy.nonzero(bar > 1)[0]

where bar is a 1d ndarray of type 'int32'
i.e. to get all indices of an array for which a condition is true.

Since in this case the arrays are quite large and the condition is only
true for a few items, creating a long boolean array and then passing
over it again to find non zero entries seems inefficient.

Felix

From kmichael.aye at googlemail.com  Wed Oct 21 03:12:00 2009
From: kmichael.aye at googlemail.com (Michael Aye)
Date: Wed, 21 Oct 2009 00:12:00 -0700 (PDT)
Subject: [SciPy-User] Image analysis: Counting masked areas
Message-ID:

Dear all,
is it possible to count the areas in an image that have been masked
by some condition?
I need to count all areas in an image darker than a certain value.
Masking is no problem, but how do I count independent areas? I was
thinking just to look at the masked array, but then a pixel-by-pixel
check if the neighbour pixel is masked as well to find *independent*
masked areas seems numerically very costly.
Isn't there a better way?

Many thanks for any help!

Best regards,
Michael

From david_baddeley at yahoo.com.au  Wed Oct 21 03:52:35 2009
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Wed, 21 Oct 2009 00:52:35 -0700 (PDT)
Subject: [SciPy-User] Image analysis: Counting masked areas
In-Reply-To:
References:
Message-ID: <834039.99735.qm@web33004.mail.mud.yahoo.com>

scipy.ndimage.label (and also potentially ndimage.measurements) might
be what you're looking for.

best wishes,
David

----- Original Message ----
From: Michael Aye
To: scipy-user at scipy.org
Sent: Wed, October 21, 2009 8:12:00 PM
Subject: [SciPy-User] Image analysis: Counting masked areas

Dear all,
is it possible to count the areas in an image that have been masked
by some condition?
I need to count all areas in an image darker than a certain value.
Masking is no problem, but how do I count independent areas? I was
thinking just to look at the masked array, but then a pixel-by-pixel
check if the neighbour pixel is masked as well to find *independent*
masked areas seems numerically very costly.
Isn't there a better way?

Many thanks for any help!

Best regards,
Michael
_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
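[A short sketch of the labelling approach David suggests, with an
arbitrary darkness threshold and a random image standing in for the
real data:]

import numpy as np
from scipy import ndimage

image = np.random.rand(64, 64)    # stand-in for the real image
dark = image < 0.2                # the masking condition

labels, num_areas = ndimage.label(dark)   # connected components of the mask
print num_areas

# per-area pixel counts via the measurement routines
sizes = ndimage.sum(dark, labels, index=range(1, num_areas + 1))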
From faltet at pytables.org  Wed Oct 21 03:55:22 2009
From: faltet at pytables.org (Francesc Alted)
Date: Wed, 21 Oct 2009 09:55:22 +0200
Subject: [SciPy-User] faster nonzero indices
In-Reply-To:
References:
Message-ID: <200910210955.22618.faltet@pytables.org>

On Wednesday 21 October 2009 06:02:53, Felix Schlesinger wrote:
> Is there a faster way to do:
>
> foo = scipy.nonzero(bar > 1)[0]
>
> where bar is a 1d ndarray of type 'int32'
> i.e. to get all indices of an array for which a condition is true.
>
> Since in this case the arrays are quite large and the condition is only
> true for a few items, creating a long boolean array and then passing
> over it again to find non zero entries seems inefficient.

If the number of elements that evaluate the condition to true is
effectively small, and you can afford to have a precomputed array with
indexes in memory (typically, an `arange()`) you can try with numexpr [1]:

In [1]: import numpy as np

In [2]: import numexpr as ne

In [3]: bar = np.random.randint(0,1e6,1e6).astype('int32')

In [4]: timeit np.where(bar > 999000)[0]
100 loops, best of 3: 12.1 ms per loop

In [5]: idx = np.arange(len(bar))

In [6]: timeit idx[ne.evaluate('where(bar > 999000, 1, 0)').astype('bool')]
100 loops, best of 3: 7.68 ms per loop

which is more than 1.5x times faster than the numpy counterpart. Even
if you have to compute idx each time, the above approach is faster than
numpy:

In [7]: timeit np.arange(len(bar))[ne.evaluate('where(bar > 999000, 1, 0)').astype('bool')]
100 loops, best of 3: 11 ms per loop

although in that case, just by a meager 10%.

[1] http://code.google.com/p/numexpr/

--
Francesc Alted

From sahar at cmt.co.il  Wed Oct 21 05:42:31 2009
From: sahar at cmt.co.il (Sahar)
Date: Wed, 21 Oct 2009 11:42:31 +0200
Subject: [SciPy-User] open dicom images
Message-ID:

Hi all,

Is there any way to read dicom images into a scipy array? Something like
Matlab's "d = dicomread('tmp_im.dcm');"

Thanks,
Sahar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav+sp at iki.fi  Wed Oct 21 06:06:54 2009
From: pav+sp at iki.fi (Pauli Virtanen)
Date: Wed, 21 Oct 2009 10:06:54 +0000 (UTC)
Subject: [SciPy-User] open dicom images
References:
Message-ID:

Wed, 21 Oct 2009 11:42:31 +0200, Sahar wrote:
> Is there any way to read dicom images into a scipy array? Something like
> Matlab's "d = dicomread('tmp_im.dcm');"

If you type "dicom python" into Google, you get

	http://pypi.python.org/pypi/pydicom/0.9.3

It seems to provide images as Numpy arrays.

From josef.pktd at gmail.com  Wed Oct 21 10:55:52 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 21 Oct 2009 10:55:52 -0400
Subject: [SciPy-User] scipy.stats.fit inquiry
In-Reply-To:
References:
Message-ID: <1cd32cbb0910210755i67c07fc6j5f17a317bbae608@mail.gmail.com>

On Tue, Oct 20, 2009 at 2:41 PM, Anne Archibald wrote:
> 2009/10/20 ?:
>> On Tue, Oct 20, 2009 at 11:13 AM, Anne Archibald
>> wrote:
>>>
>>> Incidentally, I have some code implementing the Kuiper test, a
>>> modified K-S test that is sensitive to different aspects of the shape
>>> of the distribution, and (more importantly for me) is invariant on
>>> shifting a distribution or sample modulo 1. I haven't submitted it for
>>> inclusion because the interface I used is a little different from that
>>> used by scipy's K-S test, but if there's interest I'd be happy to
>>> contribute it.
>>
>> More tests in scipy.stats is always good (at least as long as I don't
>> have to chase down the references to write the tests for the tests.)
>> How do you get the pvalue or critical values for Kuiper, since the
>> distribution of the test statistic is not a standard distribution?
>
> There's a special function, like the one for the K-S test (but
> obviously different in details), which I implemented. I have tests for
> the Kuiper test too.

I just did a quick Monte Carlo to see how well sized the test is. It
looks good for the normal distribution, and it also has some power
against the t distribution with fixed df, loc and scale.

DGP standard normal
nsim = 10000, nobs = 500
[[ 0.0094  0.0496  0.0958]     normal
 [ 0.0529  0.1759  0.287 ]]    t, df=10

nobs = 100
[[ 0.0072  0.0425  0.0831]
 [ 0.0132  0.0615  0.1191]]

nobs = 100
[[ 0.0081  0.0417  0.0841]
 [ 0.0315  0.1233  0.2128]]    t, df=5

When the parameters of the t distribution are estimated, then the test
has no power. I don't know if adjusting the critical values would help
at least for comparing similar distributions like norm and t.
(same problem with kstest)

----------
if __name__ == '__main__':
    from scipy import stats
    tcdf = lambda x: stats.t.cdf(x, 5)
    mcres = []
    nsim = 100
    nobs = 500  #500
    est = 0
    for i in range(nsim):
        rvsn = 1.0*np.random.normal(size=nobs)
        d, p = kuiper(rvsn, stats.norm.cdf)
        if est:
            targs = stats.t.fit(rvsn)
            tcdf = lambda x: stats.t.cdf(x, *targs)
            #print targs
        dt, pt = kuiper(rvsn, tcdf)
        mcres.append([d, p, dt, pt])
        #print d, p
    mcres = np.array(mcres)
    print (mcres[:,(1, 3)][:,:,None]

> Some references are Paltani 2004, "Searching for periods in X-ray
> observations using Kuiper's test.
Application to the ROSAT PSPC > archive", and Kuiper 1962, "Tests concerning random points on a > circle" (which I haven't read). > > I also have some code for the H test (essentially looking at the > Fourier coefficients to find how many harmonics are worth including > and what the significance is; de Jager et al. 1989 "A poweful test for > weak periodic signals with unknown light curve shape in sparse data"). > But circular statistics seem to be a bit obscure, so I'm not sure how > much effort should go into putting this in scipy. For sure they are obscure to me, but there are a few circular descriptive statistics in stats.morestats, and I saw a matlab toolbox on the file exchange. I figured out by now that there are some pretty different statistics used in various fields. I guess, it's all up to you. > >> (From what I understand from Sherpa, is that Cash is used as >> objective function for the maximum likelihood estimation of a >> Poisson process.) > > In principle it's more general, but it's used, for example, in the > X-ray spectral fitting program XSPEC for when you only have a handful > of photons in each energy bin. > >> Looking for Kuiper, I found a nice overview of a large list of goodness >> of fit tests, used in natural sciences. >> >> http://www.ge.infn.it/statisticaltoolkit/gof/deployment/userdoc/statistics/applicability.html >> >> with article in >> http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=29603&arnumber=1344284&count=103&index=22 > > Hmm, I'll have to look into the Watson statistic, I haven't run into it before. > >> And apropos circular statistic, which I know nothing about: >> >> stats.distribution.vonmises is the only distribution that has bounded >> (or circular) support but doesn't define the bounds (a, b). This >> screws up numerical integration, e.g. for the moment calculation. >> >> Is vonmises actually used on circular support? >> To enable integration, we need to define a and b (e.g [-pi,pi] as >> in numpy random) or introduce new bounds specifically for >> integration in this case. > > I use circular statistics quite a bit, since it's the appropriate > toolkit for working with X-ray pulsar pulse profiles. In particular > the von Mises distribution is handy for producing simulated pulse > profiles (there's also a sense in which it is the circular analog of > the normal distribution; it's maximum entropy for a fixed first moment > IIRC). Unfortunately the generic stats interface is quite clumsy for > this particular case. If I recall correctly as of 0.6.0 the interface > was kind of miserable to use, since the CDF had a discontinuity at -pi > and pi, and using loc moved the discontinuity along with the function. > (I think there was also a severe bug for large values of the shape > parameter.) I'm not sure what the interface is like in more recent > versions, since 0.6.0 is what's on the computers here at work, but I > think it's better. If you are still on 0.6.0, then you are missing all of my cleanup of stats.distributions, especially the problems with the generic methods. However, I didn't touch vonmises, since I didn't know how to interpret the support. In the current generic framework, it would be just another distribution with a finite support, where generic moments are just calculated for the interval. >From your description (below), I would think, that for circular distribution, we would need different generic functions that don't fit in the current distribution classes, integration on a circle (?) instead of integration on the real line. 
If I define boundaries, vonmises.a and vonmises.b, then
I think, you would not be able to calculate pdf(x), cdf(x)
and so on for x outside of the support [a,b]. I don't know
whether it is possible to define a,b but not enforce them in
_argcheck and only use them as integration bounds.

I checked briefly, vonmises.moment and vonmises.stats
work (integrating over ppf not pdf)
generic moment calculation with pdf (_mom0_sc) and
entropy fail. fit seems to work with the normal random
variable, but I thought I got lots of "weird" fit results
before.

For the real line, this looks "strange"
>>> stats.vonmises.pdf(np.linspace(0,3*np.pi,10), 2)
array([ 0.51588541,  0.18978364,  0.02568442,  0.00944877,  0.02568442,
        0.18978364,  0.51588541,  0.18978364,  0.02568442,  0.00944877])
>>> stats.vonmises.cdf(np.linspace(0,3*np.pi,10), 2)
array([  0.5       ,   0.89890776,   0.98532805,   1.        ,
         7.28318531,   7.28318531,   7.28318531,   7.28318531,
         7.28318531,  13.56637061])

ppf, calculated generically, works and looks only a little bit strange.

>>> stats.vonmises.ppf([0, 1e-10,1-1e-10,1],2)
array([       -Inf, -3.14159264,  3.14159264,         Inf])

>
> It's worth defining the boundaries, but I don't think you'll get
> useful moment calculations out of it, since circular moments are
> defined differently from linear moments: rather than int(x**n*pdf(x))
> they're int(exp(2j*pi*n*x)*pdf(x)). There are explicit formulas for
> these for the von Mises distribution, though, and they're useful
> because they are essentially the Fourier coefficients of the pdf.
>
> For context, I've done some work with representing circular
> probability distributions (= pulse profiles) in terms of their moments
> (= Fourier series), and in particular using a kernel density estimator
> with either a sinc kernel (= truncation) or a von Mises kernel (=
> scaling coefficients by von Mises moments). The purpose was to get
> not-too-biased estimates of the modulation amplitudes of X-ray
> pulsations; the paper has been kind of sidetracked by other work.

Sounds like statistics for seasonal time series to me, except you
might have a lot more regularity in the repeated pattern than in
economics or climate research.

Josef

>
> Anne
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From kmichael.aye at googlemail.com  Wed Oct 21 13:24:24 2009
From: kmichael.aye at googlemail.com (Michael Aye)
Date: Wed, 21 Oct 2009 10:24:24 -0700 (PDT)
Subject: [SciPy-User] Image analysis: Counting masked areas
In-Reply-To: <834039.99735.qm@web33004.mail.mud.yahoo.com>
References: <834039.99735.qm@web33004.mail.mud.yahoo.com>
Message-ID: <59efbcb1-1843-4156-bb90-a4f5bc3a78b4@g31g2000vbr.googlegroups.com>

Thank you, exactly what I needed.
For what it's worth, in case there is another image analysis newbie
like me, the IDL helpfile on image analysis explains that stuff pretty
well. But after reading of course you should come back and implement it
in scipy! :)

BR,
Michael

On Oct 21, 9:52 am, David Baddeley wrote:
> scipy.ndimage.label (and also potentially ndimage.measurements) might be what you're looking for.
>
> best wishes,
> David
>
>
>
> ----- Original Message ----
> From: Michael Aye
> To: scipy-user at scipy.org
> Sent: Wed, October 21, 2009 8:12:00 PM
> Subject: [SciPy-User] Image analysis: Counting masked areas
>
> Dear all,
> is it possible to count the areas in an image that have been masked
> by some condition?
> I need to count all areas in an image darker than a certain value.
> Masking is no problem, but how do I count independent areas? I was
> thinking just to look at the masked array, but then a pixel-by-pixel
> check if the neighbour pixel is masked as well to find *independent*
> masked areas seems numerically very costly.
> Isn't there a better way?
>
> Many thanks for any help!
>
> Best regards,
> Michael
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From helloworld777 at hotmail.com  Wed Oct 21 16:48:19 2009
From: helloworld777 at hotmail.com (hello world)
Date: Wed, 21 Oct 2009 15:48:19 -0500
Subject: [SciPy-User] Newbie Question :: Natural Cubic Spline
Message-ID:

Hello:

I am new to Python and the SciPy libraries and I was wondering if
someone could help me with the following.

Given the following x and y values:

x_list = [33, 56, 56.00000000002, 147, 238, 329, 420, 511, 602, 693, 791]
y_list = [0.99974, 0.99949, 0.99949, 0.99816, 0.99631, 0.99383, 0.99043,
          0.98610, 0.98078, 0.97460, 0.96704]

I want to use a Natural Cubic Spline in order to determine the points
at the following values:

interp_x_points = [31, 62, 92, 123, 153, 184, 215, 243, 274, 304, 335,
                   365, 396, 427, 457, 488, 518, 549, 580, 608, 639,
                   669, 700]

I have looked through the documentation and have had a tough time
determining the correct syntax or which function to use. I was
wondering what would be the SciPy syntax for the following:

Also, does scipy have the ability to extrapolate if a given X value is
outside a specified range? In this example, please notice
interp_x_points has a value (31) which is outside of the x_list range.

============================
my_spline_object = NaturalCubicSpline(x_list, y_list)

for interp_x_value in interp_x_points:
    interp_y_value = my_spline_object(interp_x_value)  # return a floating value
    print interp_y_value

Many Thanks

_________________________________________________________________
Hotmail: Free, trusted and rich email service.
http://clk.atdmt.com/GBL/go/171222984/direct/01/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
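[This question goes unanswered in this segment of the archive; a rough
sketch with scipy.interpolate follows. Note that
InterpolatedUnivariateSpline is not a natural cubic spline (its end
conditions differ), FITPACK extrapolates outside the data range by
default, and the near-duplicate x values (56 vs. 56.00000000002) may
need thinning before the fit succeeds:]

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.array([33, 56, 56.00000000002, 147, 238, 329, 420, 511,
              602, 693, 791])
y = np.array([0.99974, 0.99949, 0.99949, 0.99816, 0.99631, 0.99383,
              0.99043, 0.98610, 0.98078, 0.97460, 0.96704])

spl = InterpolatedUnivariateSpline(x, y, k=3)   # k=3: cubic
interp_x_points = np.array([31, 62, 92, 123, 153, 184, 215, 243, 274,
                            304, 335, 365, 396, 427, 457, 488, 518,
                            549, 580, 608, 639, 669, 700])
print spl(interp_x_points)   # x=31 is extrapolated, with the usual caveats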
From alemi at sissa.it  Wed Oct 21 18:03:30 2009
From: alemi at sissa.it (Alireza Alemi-Neissi)
Date: Thu, 22 Oct 2009 00:03:30 +0200 (CEST)
Subject: [SciPy-User] loading mat file in scipy
In-Reply-To: <1e2af89e0910201024q1276763aj54d33ca4cb5ee301@mail.gmail.com>
References: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it>
	<1e2af89e0910201024q1276763aj54d33ca4cb5ee301@mail.gmail.com>
Message-ID: <59753.93.37.135.52.1256162610.squirrel@webmail.sissa.it>

Thanks for all the comments.

I upgraded my python (2.6.3) and scipy (0.7.1) to the latest version.
Nevertheless, the mat file which loads in 10s in Matlab, loads in 612s
(~10min) in python.

I also set the struct_as_array=True, but it did not make loading much
faster (590s). I saved the mat file in version 5 of matlab instead of
version 7; again, it did not change anything.

The file is too big (~120MB compressed). But I can tell you what its
structure is:

N is a <1 x 94 struct>
each N.Stim is a <1 x 213 Cell>
each N.Stim has a struct.

Let me write an example of the first N and first Stim:

>>> N(1,1).Stim{1,1}

    NStimPerSet: 1
            Obj: [1x1 struct]
          Npres: 7
          times: {1x7 cell}
     SpikeTimes: {1x7 cell}
       PosInSeq: [4 9 14 20 6 4 9]
       TimeBins: [1x24 double]
     AvFireRate: [1x24 double]
      PlotColor: 'b'
             FR: [10.4167 0 0 0 10.4167 10.4167 0]
            AFR: 4.4643
            std: 5.5679
          ttest: [1x8 struct]

Please note that these fields are repeated for all 94 members of N and
all 213 members of Stim.

To share the data, I have to get the approval of my supervisor. I will
let you know.

I am surprised that a 250Mb .mat file generated by ControlDesk
(Dspace?) can be loaded in python so fast. Is the nested structure as
complicated as mine?

Disappointed with using loadmat, I am heading to use weave.inline to
write a code in C, loading the mat file using the tools in the mat.h
header (hope it does not get complicated!). Loading it in C seems
straightforward. But I have not figured out yet how to convert the data
loaded in C into a Python class (any clue?).

Do you have other ideas to make loadmat faster?

Thanks,
Alireza

> Hi,
>
>> I have recently tried to use scipy.io.loadmat() to load a mat file to
>> python.
>> The mat file is a complicated nested structure in matlab. It turned out
>> that it takes a long time (around 5 min) to load this data the size of
>> which is around 120MB.
>> On the other hand it take only a couple of seconds to load it in Matlab.
>> What is wrong with loadmat in scipy?
>
> What version of scipy do you have? There was a performance bug I put
> in for 0.7 and removed for 0.7.1.
>
> If you are using 0.7.1, then maybe you can let me have a look at your
> file somehow - maybe by sharing a dropbox folder?
>
> Thanks a lot,
>
> Matthew
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

----------------------------------------------------------------
SISSA Webmail https://webmail.sissa.it/
Powered by SquirrelMail http://www.squirrelmail.org/

From peridot.faceted at gmail.com  Wed Oct 21 18:15:21 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 21 Oct 2009 18:15:21 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
Message-ID:

2009/10/21 :
> When the parameters of the t distribution
> are estimated, then the test has no power.
> I don't know if adjusting the critical values
> would help at least for comparing similar
> distributions like norm and t.
> (same problem with kstest)

Yes, this test has its limitations. It's also not as sensitive as one
might wish for distinguishing multimodal distributions from a constant.

> To save on the lambda function, the cdf could take args given as
> an argument to kuiper. Maybe the specification of the cdf argument
> could be taken from kstest, but not (the generality of) the rvs argument
> in kstest.

I loathe interfaces that pass args; I find they make the interface more
confusing while adding no functionality. But I realize that they're
standard and not going away (what about the currying functionality in
new pythons?), so I'll add it.

> Your cdf_from_intervals looks nice for a univariate stepfunction distribution,
> There are several pieces for this on the mailing list, but I never got around
> to collecting them.

This was just a quick hack because I needed it; I'm not sure it's worth
including in scipy.stats, since an appropriately general version might
be just as difficult to call as to reimplement.

> Your interval functions, I haven't quite figured out.
> Is fold_intervals supposed to convert a set of overlapping intervals to
> a non-overlapping partitioning with associated weights? For example for
> merging 2 histograms with non-identical bounds?

That among other things. The context was that I had observations of an
X-ray binary, consisting of several different segments taken with several
different instruments. So I'd have one observation covering orbital
phases 0.8 to 1.3 (i.e. 0.5 turns) with effective area 10 cm^2 (say), and
another covering phases 0.2 to 2.1 (i.e. 1.9 turns) with effective area
12 cm^2 (say). So a constant flux from the source would produce a
non-constant distribution of photons modulo 1; fold_intervals is designed
to convert those two weighted spans to a collection of non-overlapping
weighted intervals covering [0,1), for use in a CDF for the Kuiper test.
The histogram stuff is there to allow plotting the results in a familiar
form.

> I haven't figured out the shift invariance.

The idea is just that if you have, say, a collection X of samples from
[0,1) that you are testing for uniformity, the Kuiper test returns the
same result for X and for (X+0.3)%1. Since for pulsars there is often no
natural start time, this is a sensible thing to ask from any test for
uniformity.

>> I also have some code for the H test (essentially looking at the
>> Fourier coefficients to find how many harmonics are worth including
>> and what the significance is; de Jager et al. 1989 "A powerful test
>> for weak periodic signals with unknown light curve shape in sparse
>> data"). But circular statistics seem to be a bit obscure, so I'm not
>> sure how much effort should go into putting this in scipy.
>
> For sure they are obscure to me, but there are a few circular
> descriptive statistics in stats.morestats, and I saw a matlab toolbox
> on the file exchange. I figured out by now that there are some pretty
> different statistics used in various fields. I guess, it's all up to
> you.

To be honest, these are a little obscure even among pulsar astronomers;
here as elsewhere histograms seem to dominate.

> From your description (below), I would think that for circular
> distributions we would need different generic functions that don't fit
> in the current distribution classes, integration on a circle (?)
> instead of integration on the real line.

Basically, yes. Of course you can view integration on a circle as
integration on a line parameterizing the circle, but there's no way
around the fact that the first moment is a complex number which contains
both position (argument) and concentration (magnitude) information.

> If I define boundaries, vonmises.a and vonmises.b, then I think you
> would not be able to calculate pdf(x), cdf(x) and so on for x outside
> of the support [a,b]. I don't know whether it is possible to define a,b
> but not enforce them in _argcheck and only use them as integration
> bounds.

It's useful to be able to work with pdf and cdf outside the bounds,
particularly since the effect of loc is to shift the bounds - so any code
that takes loc as a free parameter would otherwise have to work hard to
avoid going outside the bounds.

> I checked briefly: vonmises.moment and vonmises.stats work (integrating
> over ppf not pdf); generic moment calculation with pdf (_mom0_sc) and
> entropy fail. fit seems to work with the normal random variable, but I
> thought I got lots of "weird" fit results before.
> For the real line, this looks "strange":
>
> >>> stats.vonmises.pdf(np.linspace(0, 3*np.pi, 10), 2)
> array([ 0.51588541,  0.18978364,  0.02568442,  0.00944877,  0.02568442,
>         0.18978364,  0.51588541,  0.18978364,  0.02568442,  0.00944877])
> >>> stats.vonmises.cdf(np.linspace(0, 3*np.pi, 10), 2)
> array([  0.5       ,   0.89890776,   0.98532805,   1.        ,
>          7.28318531,   7.28318531,   7.28318531,   7.28318531,
>          7.28318531,  13.56637061])

Unfortunately it seems that my attempt to fix the von Mises distribution
(using vonmises_cython) went badly awry, and now it reports nonsense for
values outside -pi..pi. I could have sworn I had tests for that. I intend
to fix this during the weekend's code sprint. "Correct" behaviour (IMHO)
would make the CDF the antiderivative of the PDF, even though this means
it leaves [0,1].

Incidentally, how would I get nosetests to run only some of the
distributions tests (ideally just the ones I choose)? When I just do
"nosetests scipy.stats" it takes *forever*.

> ppf, calculated generically, works and looks only a little bit strange.
>
> >>> stats.vonmises.ppf([0, 1e-10, 1-1e-10, 1], 2)
> array([       -Inf, -3.14159264,  3.14159264,         Inf])

>> It's worth defining the boundaries, but I don't think you'll get
>> useful moment calculations out of it, since circular moments are
>> defined differently from linear moments: rather than int(x**n*pdf(x))
>> they're int(exp(2j*pi*n*x)*pdf(x)). There are explicit formulas for
>> these for the von Mises distribution, though, and they're useful
>> because they are essentially the Fourier coefficients of the pdf.
>>
>> For context, I've done some work with representing circular
>> probability distributions (= pulse profiles) in terms of their moments
>> (= Fourier series), and in particular using a kernel density estimator
>> with either a sinc kernel (= truncation) or a von Mises kernel (=
>> scaling coefficients by von Mises moments). The purpose was to get
>> not-too-biased estimates of the modulation amplitudes of X-ray
>> pulsations; the paper has been kind of sidetracked by other work.
>
> Sounds like statistics for seasonal time series to me, except you
> might have a lot more regularity in the repeated pattern than in
> economics or climate research.

Well, yes and no. Many pulsars have a stable average pulse profile, which
is what we usually care about, especially when we're in the regime of one
photon every few tens of thousands of turns. On the other hand, when
dealing with orbital variability, I ran into a statistical question I
wasn't sure how to pose: if I add all the (partial) orbits of data I have
together, I get a distribution that is pretty clearly nonconstant. But is
that variability actually repeating from orbit to orbit, or is it really
just random variability? In a seasonal context it sounds less exotic: if
I combine data from the last three years, I find a statistically
significant excess of rain in the fall. But is this just the effect of
one very rainy season, that happened to be in the fall one year, or is it
that each fall there is an excess? The individual years, unfortunately,
don't seem to have enough events to detect significant non-uniformity.

Anne
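Anne's implementation itself is not posted in this thread, but the
statistic she describes is compact. A minimal sketch, assuming samples on
[0, 1) and a fully specified continuous CDF; kuiper_statistic is a
hypothetical name (not her interface), and no significance level is
computed:

import numpy as np

def kuiper_statistic(x, cdf=lambda t: t):
    # Kuiper's V = D+ + D-. Unlike the one-sided K-S statistic D,
    # V is invariant under cyclic shifts of the sample modulo 1.
    x = np.sort(x)
    n = len(x)
    f = cdf(x)
    d_plus = (np.arange(1.0, n + 1.0) / n - f).max()
    d_minus = (f - np.arange(0.0, n) / n).max()
    return d_plus + d_minus

x = np.random.rand(1000)
v1 = kuiper_statistic(x)                # uniform null, identity CDF
v2 = kuiper_statistic((x + 0.3) % 1.0)  # equals v1 up to float rounding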
From silva at lma.cnrs-mrs.fr  Wed Oct 21 18:19:40 2009
From: silva at lma.cnrs-mrs.fr (Fabricio Silva)
Date: Thu, 22 Oct 2009 00:19:40 +0200
Subject: [SciPy-User] loading mat file in scipy
In-Reply-To: <59753.93.37.135.52.1256162610.squirrel@webmail.sissa.it>
References: <3312.140.105.40.24.1256052796.squirrel@webmail.sissa.it> <1e2af89e0910201024q1276763aj54d33ca4cb5ee301@mail.gmail.com> <59753.93.37.135.52.1256162610.squirrel@webmail.sissa.it>
Message-ID: <1256163580.15372.10.camel@PCTerrusse>

On Thursday 22 October 2009 at 00:03 +0200, Alireza Alemi-Neissi wrote:
[clip]
> I am surprised that a 250MB .mat file generated by ControlDesk (dSPACE?)
> can be loaded in Python so fast. Is its nested structure as complicated
> as mine?

No it isn't. But it stores many data signals that fill the 250MB files. A
typical call to io.loadmat is

>>> dic = io.loadmat('path/to/file', squeeze_me=True)
>>> dic = dic['weird_name']
>>> Fe = 1./(dic.Capture.SamplingPeriod)
>>> vec_time = dic.X.Data
...
>>> Sig_20 = dic.Y[20,1].Data

So not so many nested structures in fact, but big files.

> Disappointed with loadmat, I am heading towards using weave.inline to
> write code in C that loads the mat file using the tools in the mat.h
> header (hope it does not get complicated!). Loading itself seems
> straightforward, but I have not yet figured out how to convert the data
> loaded in C into Python objects (any clue?).

An old snippet I used to run was creating the empty array in Python and
filling it in C with weave:

Rx = np.zeros(n, dtype=float)
import scipy.weave as weave
code = """
int i1;
for (i1=0; i1<n; i1++) {
    /* fill Rx(i1) here from your C-side loader */
}
"""
weave.inline(code, ['Rx', 'n'], type_converters=weave.converters.blitz)

> Do you have other ideas to make loadmat faster?

No, sorry.
-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics (CNRS, UPR 7051)
From josef.pktd at gmail.com  Wed Oct 21 19:42:56 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 21 Oct 2009 19:42:56 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
In-Reply-To:
References:
Message-ID: <1cd32cbb0910211642y3de47cc5n79568dd6befdf5fd@mail.gmail.com>

On Wed, Oct 21, 2009 at 6:15 PM, Anne Archibald wrote:
> Incidentally, how would I get nosetests to run only some of the
> distributions tests (ideally just the ones I choose)? When I just do
> "nosetests scipy.stats" it takes *forever*.

With nosetests you can also specify the module or even just the name of
the test. I used this a lot but don't remember the syntax (with . or
with :); try

nosetests path/to/test_continuous_basic.py

and

nosetests path/to/test_continuous_extra.py

Since most of the distribution tests are generators for the list of
distributions, you cannot pick just one distribution without editing the
tests. When testing a specific distribution, I usually overwrote the list
of distributions; see the commented-out examples in
http://projects.scipy.org/scipy/browser/trunk/scipy/stats/tests/test_continuous_basic.py#L136

test_continuous_extra.py reads the list of distributions to be tested
from test_continuous_basic.py

vonmises is classified as slow, and run only with "full", which is not
the default for scipy.test(), but is the default for using nosetests on
the command line. There is a command line argument for nosetests
(--"not-slow") to skip the tests decorated as slow.

Hope that helps.

Josef

> Anne
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From aisaac at american.edu  Wed Oct 21 21:46:55 2009
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 21 Oct 2009 21:46:55 -0400
Subject: [SciPy-User] ANN: graph-tool, a package for graph analysis and manipulation
In-Reply-To:
References:
Message-ID: <4ADFB98F.6020108@american.edu>

On 10/5/2009 12:22 PM, Tiago de Paula Peixoto wrote:
> http://graph-tool.forked.de

So is the idea, in contrast say to NetworkX, that graph-tool wraps the
Boost Graph Library? If so, why is the license more restrictive than the
BGL?

Thanks,
Alan Isaac

From dwf at cs.toronto.edu  Wed Oct 21 23:42:16 2009
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Wed, 21 Oct 2009 23:42:16 -0400
Subject: [SciPy-User] Image analysis: Counting masked areas
In-Reply-To:
References:
Message-ID: <20091022034216.GA29801@rodimus>

On Wed, Oct 21, 2009 at 12:12:00AM -0700, Michael Aye wrote:
> Dear all,
> is it possible to count the areas in an image, that have been masked
> by some condition?
> I need to count all areas in an image darker than a certain value.
> Masking is no problem, but how do I count independent areas? I was
> thinking just to look at the masked array, but then a pixel-by-pixel
> check whether the neighbour pixel is masked as well to find
> *independent* masked areas seems numerically very costly.
> Isn't there a better way?

In Python, it would be, yes. Have a look at scipy.ndimage.label(), it
does exactly this.

David
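A sketch of David's suggestion applied to Michael's problem; the image
array and the darkness threshold below are stand-ins for the real data:

import numpy as np
from scipy import ndimage

img = np.random.rand(100, 100)   # stand-in for the real image
mask = img < 0.2                 # pixels darker than some threshold
labels, n_areas = ndimage.label(mask)
print n_areas                    # number of independent masked areas
# pixel count of each area, if needed:
sizes = ndimage.sum(mask, labels, np.arange(1, n_areas + 1))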
From etrade.griffiths at dsl.pipex.com  Thu Oct 22 04:17:16 2009
From: etrade.griffiths at dsl.pipex.com (Etrade Griffiths)
Date: Thu, 22 Oct 2009 09:17:16 +0100
Subject: [SciPy-User] Newbie Question :: Natural Cubic Spline
In-Reply-To:
References:
Message-ID: <87rgru$64eu21@smtp.pipex.tiscali.co.uk>

>Hello:
>
>I am new to Python and the SciPy libraries and I was wondering if
>someone could help me with the following.
[clip]
>Also, does scipy have the ability to extrapolate if a given x value
>is outside a specified range?
>In this example, please notice that interp_x_points has a value (31)
>which is outside of the x_list range.

Try this:

# Example use of cubic spline

import scipy.interpolate

x_list = [33, 56, 56.00000000002, 147, 238, 329, 420, 511, 602, 693, 791]

y_list = [0.99974, 0.99949, 0.99949, 0.99816, 0.99631, 0.99383, 0.99043,
          0.98610, 0.98078, 0.97460, 0.96704]

interp_x_points = [31, 62, 92, 123, 153, 184, 215, 243, 274, 304, 335,
                   365, 396, 427, 457, 488, 518, 549, 580, 608, 639, 669,
                   700]

# Get the knot points (s=0.0 => "natural" splines)
tck = scipy.interpolate.splrep(x_list, y_list, s=0.0)

for xval in interp_x_points:
    try:
        yval = scipy.interpolate.splev(xval, tck, der=0)
    except:
        yval = -1.0E30
    print "x value: %5.1f; y value: %7.5f" % (xval, yval)

Spline fitting is a two step process: first calculate the knot points and
then do the interpolation. Press et al have a nice description of the
process here (Chapter 3.3)

http://www.nrbook.com/a/bookcpdf.php

Note that you may have to install the free plug-in mentioned on this web
page.

HTH

Alun Griffiths
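If the object-style interface from the original pseudocode is preferred,
a similar sketch uses UnivariateSpline (with s=0 this is an interpolating
cubic spline, not strictly a "natural" one, and extrapolation behaviour
outside [33, 791] may depend on the scipy version):

from scipy.interpolate import UnivariateSpline

spline = UnivariateSpline(x_list, y_list, k=3, s=0.0)
for xval in interp_x_points:
    print xval, spline(xval)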
""" See you there, Regards St?fan From alemi at sissa.it Thu Oct 22 11:59:22 2009 From: alemi at sissa.it (Alireza Alemi-Neissi) Date: Thu, 22 Oct 2009 17:59:22 +0200 (CEST) Subject: [SciPy-User] loading mat file in scipy Message-ID: <63321.93.37.128.116.1256227162.squirrel@webmail.sissa.it> Robin wrote: > On Tue, Oct 20, 2009 at 6:24 PM, Matthew Brett wrote: >> If you are using 0.7.1, then maybe you can let me have a look at your >> file somehow - maybe by sharing a dropbox folder? > Here is the link of the data: http://rapidshare.com/files/296444142/nestedStructedSyptom.mat.html > Are you able to look at the file I provided: > http://www.robince.net/robince/structs_cells.mat > > At 900K I think it is a more reasonable size for testing, and in terms > of s/MB performs a lot slower than the OPs original example. I'm sure > the root cause for any slowness would be the same - it also has > structs of cells with structs nested many times. > I am sure the cause of slow loading of your file and that of mine are the same. It took ~56sec on my computer to load your data into the python. Cheers, Alireza ---------------------------------------------------------------- SISSA Webmail https://webmail.sissa.it/ Powered by SquirrelMail http://www.squirrelmail.org/ From dcswest at gmail.com Thu Oct 22 15:07:18 2009 From: dcswest at gmail.com (Dennis C) Date: Thu, 22 Oct 2009 12:07:18 -0700 Subject: [SciPy-User] maximization functions Message-ID: Greetings; Although I've been using numpy on a limited basis for a while now, just recently came across scipy in a search for any "more standardized" python optimization functions such as parabolic interpolation and a genetic algorithm that might just work more reliably than what I've written up thus far for either and seems like all I've found is the inverse of the former and future plans for the latter, so basically am wondering if there's already any maximization functions too in scipy as there seem to be minimization ones? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Oct 22 15:33:48 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Oct 2009 14:33:48 -0500 Subject: [SciPy-User] maximization functions In-Reply-To: References: Message-ID: <3d375d730910221233q57a1ee85r621fae1e53151c10@mail.gmail.com> On Thu, Oct 22, 2009 at 14:07, Dennis C wrote: > Greetings; > Although I've been using numpy on a limited basis for a while now, just > recently came across scipy in a search for any "more standardized" python > optimization functions such as parabolic interpolation and a genetic > algorithm that might just work more reliably than what I've written up thus > far for either and seems like all I've found is the inverse of the former > and future plans for the latter, so basically am wondering if there's > already any maximization functions too in scipy as there seem to be > minimization ones? To convert a maximization problem to a minimization problem: def func_to_minimize(x): return -func_to_maximize(x) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
From maynard at bu.edu  Thu Oct 22 15:54:14 2009
From: maynard at bu.edu (Kris Maynard)
Date: Thu, 22 Oct 2009 15:54:14 -0400
Subject: [SciPy-User] curve_fit and least squares
In-Reply-To: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com>
References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com>
Message-ID: <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com>

Hi,

Thanks for your responses. After some more digging and some more testing
I'm beginning to think that the algorithm used by curve_fit simply isn't
robust enough for the data that I am trying to fit. Below is an example
of some experimental radioactive decay data that I am trying to fit to an
exponential decay.

#!/usr/bin/env python
import numpy as np
import scipy as sp
import pylab as pl
from scipy.optimize.minpack import curve_fit

x = [ 50., 110., 170., 230., 290., 350., 410., 470., 530., 590.]
y = [ 3173., 2391., 1726., 1388., 1057., 786., 598., 443., 339., 263.]

smoothx = np.linspace(x[0], x[-1], 20)
guess_a, guess_b, guess_c = 4000, -0.005, 100
guess = [guess_a, guess_b, guess_c]

f_theory1 = lambda t, a, b, c: a * np.exp(b * t) + c
f_theory2 = lambda t, a, b: np.exp(a * t) + b

pl.plot(x, y, 'b.', smoothx, f_theory1(smoothx, guess_a, guess_b, guess_c))
pl.show()

p, cov = curve_fit(f_theory1, x, y)
#p, cov = curve_fit(f_theory2, x, y)

# the following gives:
#   ValueError: shape mismatch: objects cannot be broadcast to a single shape
#p, cov = curve_fit(f_theory1, x, y, p0=guess)

pl.clf()
f_fit1 = lambda t: p[0] * np.exp(p[1] * t) + p[2]
#f_fit2 = lambda t: np.exp(p[0] * t) + p[1]
pl.plot(x, y, 'b.', smoothx, f_fit1(smoothx), 'k-')
pl.show()

##
## EOF
##

As you can see, I have tried to fit using 2 or 3 parameters with no luck.
Is there something that I could do to make this work? I have tried this
exact thing in matlab and it worked the first time. Unfortunately, I
would really like to use python as I find it in general more intuitive
than matlab.

Thanks,
~Kris

On Wed, Oct 7, 2009 at 9:40 AM, Bruce Southey wrote:
> On Wed, Oct 7, 2009 at 1:19 AM,  wrote:
> > On Wed, Oct 7, 2009 at 1:36 AM, Kris Maynard wrote:
> >> I am having trouble with fitting data to an exponential curve. I have
> >> an x-y data series that I would like to fit to an exponential using
> >> least squares and have access to the covariance matrix of the result.
> >> I summarize my problem in the following example:
> >>
> >> import numpy as np
> >> import scipy as sp
> >> from scipy.optimize.minpack import curve_fit
> >>
> >> A, B = 5, 0.5
> >> x = np.linspace(0, 5, 10)
> >> real_f = lambda x: A * np.exp(-1.0 * B * x)
> >> y = real_f(x)
> >> ynoisy = y + 0.01 * np.random.randn(len(x))
> >>
> >> exp_f = lambda x, a, b: a * np.exp(-1.0 * b * x)
> >>
> >> # this line raises the error:
> >> #  RuntimeError: Optimal parameters not found: Both
> >> #  actual and predicted relative reductions in the sum of squares
> >> #  are at most 0.000000 and the relative error between two
> >> #  consecutive iterates is at most 0.000000
> >>
> >> params, cov = curve_fit(exp_f, x, ynoisy)
>
> Could you please first plot your data?
> As you would see, the curve is very poorly defined with those model
> parameters and range. So you are asking a lot from your model and data.
> At least you need a wider range with those parameters or, as Josef
> says, different parameter(s):
>
> > this might be the same as http://projects.scipy.org/scipy/ticket/984
> > and http://mail.scipy.org/pipermail/scipy-user/2009-August/022090.html
> >
> > If I increase your noise standard deviation from 0.1 to 0.2 then I do
> > get correct estimation results in your example.
> >
> >> I have tried to use the minpack.leastsq function directly with
> >> similar results. I also tried taking the log and fitting to a line
> >> with no success. The results are the same using scipy 0.7.1 as well
> >> as 0.8.0.dev5953. Am I not using the curve_fit function correctly?
> >
> > With minpack.leastsq, error code 2 should be just a warning. If you
> > get incorrect parameter estimates with optimize.leastsq, besides the
> > warning, could you post the example so I can have a look.
> >
> > It looks like if you take logs then you would have a problem that is
> > linear in (transformed) parameters, where you could use linear least
> > squares if you just want a fit without the standard errors of the
> > original parameters (constant)
>
> The errors will be multiplicative rather than additive.
>
> Bruce
>
> > I hope that helps.
> >
> > Josef
[clip]

--
Heisenberg went for a drive and got stopped by a traffic cop. The cop
asked, "Do you know how fast you were going?" Heisenberg replied, "No,
but I know where I am."

From jsseabold at gmail.com  Thu Oct 22 16:09:04 2009
From: jsseabold at gmail.com (Skipper Seabold)
Date: Thu, 22 Oct 2009 16:09:04 -0400
Subject: [SciPy-User] curve_fit and least squares
In-Reply-To: <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com>
References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com>
Message-ID:

On Thu, Oct 22, 2009 at 3:54 PM, Kris Maynard wrote:
> Hi,
> Thanks for your responses. After some more digging and some more
> testing I'm beginning to think that the algorithm used by curve_fit
> simply isn't robust enough for the data that I am trying to fit.
[clip]
> smoothx = np.linspace(x[0], x[-1], 20) > guess_a, guess_b, guess_c = 4000, -0.005, 100 > guess = [guess_a, guess_b, guess_c] > f_theory1 = lambda t, a, b, c: a * np.exp(b * t) + c > f_theory2 = lambda t, a, b: np.exp(a * t) + b > pl.plot(x, y, 'b.', smoothx, f_theory1(smoothx, guess_a, guess_b, guess_c)) > pl.show() > p, cov = curve_fit(f_theory1, x, y) > #p, cov = curve_fit(f_theory2, x, y) > # the following gives: > # ? ValueError: shape mismatch: objects cannot be broadcast to a single > shape > #p, cov = curve_fit(f_theory1, x, y, p0=guess) > pl.clf() > f_fit1 = lambda t: p[0] * np.exp(p[1] * t) + p[2] > #f_fit2 = lambda t: np.exp(p[0] * t) + p[1] > pl.plot(x, y, 'b.', smoothx, f_fit1(smoothx), 'k-') > pl.show() > ## > ## EOF > ## > As you can see, I have tried to fit using 2 or 3 parameters with no luck. Is > there something that I could do to make this work? I have tried this exact > thing in matlab and it worked the first time. Unfortunately, I would really > like to use python as I find it in general more intuitive than matlab. > Thanks, > ~Kris I didn't look too closely, but the error message suggests that your fit_theory functions return an array that is not the right dimension compared to y Consider: f_theory1(smoothx, guess_a, guess_b, guess_c) - y #ValueError: shape mismatch: objects cannot be broadcast to a single shape f_theory1(smoothx, guess_a, guess_b, guess_c).shape # (20,) # of type array type(y) # list This return an array, but your y is still a list, so In [52]: curve_fit(f_theory1, np.array(x), np.array(y), p0=(guess_a, guess_b, guess_c)) Out[52]: (array([ 3.97627120e+03, -4.76859072e-03, 2.93930899e+01]), array([[ 2.03195757e+03, -1.17944031e-03, -3.33962266e+02], [ -1.17944031e-03, 3.14507326e-08, -6.91762164e-03], [ -3.33962266e+02, -6.91762164e-03, 1.80377505e+03]])) I have no idea if this result makes sense, but it might point you in the right direction. Skipper > On Wed, Oct 7, 2009 at 9:40 AM, Bruce Southey wrote: >> >> On Wed, Oct 7, 2009 at 1:19 AM, ? wrote: >> > On Wed, Oct 7, 2009 at 1:36 AM, Kris Maynard wrote: >> >> I am having trouble with fitting data to an exponential curve. I have >> >> an x-y >> >> data series that I would like to fit to an exponential using least >> >> squares >> >> and have access to the covariance matrix of the result. I summarize my >> >> problem in the following example: >> >> >> >> import numpy as np >> >> import scipy as sp >> >> from scipy.optimize.minpack import curve_fit >> >> >> >> A, B = 5, 0.5 >> >> x = np.linspace(0, 5, 10) >> >> real_f = lambda x: A * np.exp(-1.0 * B * x) >> >> y = real_f(x) >> >> ynoisy = y + 0.01 * np.random.randn(len(x)) >> >> >> >> exp_f = lambda x, a, b: a * np.exp(-1.0 * b * x) >> >> >> >> # this line raises the error: >> >> >> >> #? RuntimeError: Optimal parameters not found: Both >> >> >> >> #? actual and predicted relative reductions in the sum of squares >> >> >> >> #? are at most 0.000000 and the relative error between two >> >> >> >> #? consecutive iterates is at most 0.000000 >> >> >> >> params, cov = curve_fit(exp_f, x, ynoisy) >> > >> >> Could you please first plot your data? >> As you would see, the curve is very poorly defined with those model >> parameters and range. So you are asking a lot from your model and >> data. 
At least you need a wider range with those parameters or Josef >> says different parameter(s): >> >> > this might be the same as ?http://projects.scipy.org/scipy/ticket/984 >> > and >> > http://mail.scipy.org/pipermail/scipy-user/2009-August/022090.html >> > >> > If I increase your noise standard deviation from 0.1 to 0.2 then I do >> > get >> > correct estimation results in your example. >> > >> >> >> >> I have tried to use the minpack.leastsq function directly with similar >> >> results. I also tried taking the log and fitting to a line with no >> >> success. >> >> The results are the same using scipy 0.7.1 as well as 0.8.0.dev5953. Am >> >> I >> >> not using the curve_fit function correctly? >> > >> > With ? minpack.leastsq ? error code 2 should be just a warning. If you >> > get >> > incorrect parameter estimates with optimize.leastsq, besides the >> > warning, could >> > you post the example so I can have a look. >> > >> > It looks like if you take logs then you would have a problem that is >> > linear in >> > (transformed) parameters, where you could use linear least squares if >> > you >> > just want a fit without the standard errors of the original parameters >> > (constant) >> >> The errors will be multiplicative rather than additive. >> >> Bruce >> >> > >> > I hope that helps. >> > >> > Josef >> > >> > >> >> Thanks, >> >> ~Kris >> >> -- >> >> Heisenberg went for a drive and got stopped by a traffic cop. The cop >> >> asked, >> >> "Do you know how fast you were going?" Heisenberg replied, "No, but I >> >> know >> >> where I am." >> >> >> >> _______________________________________________ >> >> SciPy-User mailing list >> >> SciPy-User at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> >> Hi, >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > Heisenberg went for a drive and got stopped by a traffic cop. The cop asked, > "Do you know how fast you were going?" Heisenberg replied, "No, but I know > where I am." > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From warren.weckesser at enthought.com Thu Oct 22 16:12:10 2009 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 22 Oct 2009 15:12:10 -0500 Subject: [SciPy-User] curve_fit and least squares In-Reply-To: <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com> References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com> Message-ID: <4AE0BC9A.1000205@enthought.com> Kris, Your script worked for me if I explicitly converted everything to numpy arrays. 
Here's my edited version:

----------
#!/usr/bin/env python
import numpy as np
import scipy as sp
import pylab as pl
from scipy.optimize.minpack import curve_fit

x = np.array([ 50., 110., 170., 230., 290., 350., 410., 470., 530., 590.])
y = np.array([ 3173., 2391., 1726., 1388., 1057., 786., 598., 443., 339.,
               263.])

smoothx = np.linspace(x[0], x[-1], 20)
guess_a, guess_b, guess_c = 4000, -0.005, 100
guess = [guess_a, guess_b, guess_c]

f_theory1 = lambda t, a, b, c: a * np.exp(b * t) + c

p, cov = curve_fit(f_theory1, x, y, p0=np.array(guess))

pl.clf()
f_fit1 = lambda t: p[0] * np.exp(p[1] * t) + p[2]
# pl.plot(x, y, 'b.', smoothx, f_theory1(smoothx, guess_a, guess_b, guess_c))
pl.plot(x, y, 'b.', smoothx, f_fit1(smoothx), 'r-')
pl.show()

##
## EOF
##
----------

Warren

Kris Maynard wrote:
> Hi,
>
> Thanks for your responses. After some more digging and some more
> testing I'm beginning to think that the algorithm used by curve_fit
> simply isn't robust enough for the data that I am trying to fit.
[clip]
From maynard at bu.edu  Thu Oct 22 16:20:50 2009
From: maynard at bu.edu (Kris Maynard)
Date: Thu, 22 Oct 2009 16:20:50 -0400
Subject: [SciPy-User] curve_fit and least squares
In-Reply-To: <4AE0BC9A.1000205@enthought.com>
References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com> <4AE0BC9A.1000205@enthought.com>
Message-ID: <9d5f455c0910221320n567924f7n80698d7966808a4@mail.gmail.com>

Oops.

Right you both are. I suppose the answer to my general confusion is that
jimmying the initial fit parameters in the right way will make curve_fit
work. Thanks!

~Kris

On Thu, Oct 22, 2009 at 4:12 PM, Warren Weckesser
< warren.weckesser at enthought.com> wrote:
> Kris,
>
> Your script worked for me if I explicitly converted everything to numpy
> arrays.
[clip]

--
Heisenberg went for a drive and got stopped by a traffic cop. The cop
asked, "Do you know how fast you were going?" Heisenberg replied, "No,
but I know where I am."

From josef.pktd at gmail.com  Thu Oct 22 16:29:09 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 22 Oct 2009 16:29:09 -0400
Subject: [SciPy-User] curve_fit and least squares
In-Reply-To: <9d5f455c0910221320n567924f7n80698d7966808a4@mail.gmail.com>
References: <9d5f455c0910062236p48a66705j8bd55484ebb04f12@mail.gmail.com> <1cd32cbb0910062319o2c313f94nb436b04a6da3b4da@mail.gmail.com> <9d5f455c0910221254i11b4cc8cm6446153bfd7b245c@mail.gmail.com> <4AE0BC9A.1000205@enthought.com> <9d5f455c0910221320n567924f7n80698d7966808a4@mail.gmail.com>
Message-ID: <1cd32cbb0910221329t2f2d98bdgf1baafe60963dcf6@mail.gmail.com>

On Thu, Oct 22, 2009 at 4:20 PM, Kris Maynard wrote:
> Oops.
>
> Right you both are. I suppose the answer to my general confusion is
> that jimmying the initial fit parameters in the right way will make
> curve_fit work. Thanks!
[clip]

I haven't looked, but from your discussion, it looks like there is a

y = np.asarray(y)

missing in curve_fit ?

Josef
For the sake of simplification, let us assume that a single predictor datapoint and a single response datapoint is a scalar value. I would then have a function f(p,x) that has a number of parameters p (let us say p is a vector of length q). Associated with x_i and y_i, I would have a set of experimentally measured errors, sigma_x_i and sigma_y_i.

I would then like to perform an orthogonal distance regression, minimizing the function

\sum_{i=1}^n ( (1/sigma_y_i)^2 * (f(p, x_i + delta_i) - y_i)^2 + (1/sigma_x_i)^2 * delta_i^2 )

by varying the p_j and the delta_i, where j goes from 1 to q. This is classical orthogonal distance regression with weighted input variables, and it is performed by scipy.odr, also for datapoints of multiple dimensions.

Scipy.odr nicely calculates some local minimum, and also calculates the covariance matrix and asymptotic standard errors. It also allows the user to fix some of the p_i during fitting, such that only a subset of the p_i is varied in order to find a local minimum.

What scipy.odr does not provide, and what I am looking for in a program, is constraints. There is no way to specify that the p_i should have boundaries, and there is no way to specify non-local constraints such as p_1*p_2 = p_3. In the openopt library, I have noticed that such constraints are routinely handled, and that there are also box constraints such as 0 < p_i < 1.

From cohen at lpta.in2p3.fr Sat Oct 24 06:45:28 2009 From: cohen at lpta.in2p3.fr (Johann Cohen-Tanugi) Date: Sat, 24 Oct 2009 12:45:28 +0200 Subject: [SciPy-User] scipy.stats.fit inquiry In-Reply-To: References: <1cd32cbb0910200352k14dbdeb2nedad5a5c37b1bf90@mail.gmail.com> Message-ID: <4AE2DAC8.2010609@lpta.in2p3.fr>

I think that in essence it is even simpler than that. It is a Poisson likelihood objective function, reformatted so that its difference when changing models behaves like a chi2 in the limit of large enough counts. Note that it is perfectly fine for unbinned analysis, contrary to what could be inferred from the Sherpa discussion. Roughly speaking, because it behaves well at very small counts per bin, you can go without trouble to the limit of one or zero counts per bin, which is actually unbinned.... We use it extensively in gamma-ray astrophysics, especially with the Fermi observatory (http://fermi.gsfc.nasa.gov/).

best,
Johann

Anne Archibald wrote:
> 2009/10/20 :
>
>> I never heard about the Cash statistic.
>
> It's a clever trick for estimating uncertainties on fitted parameters; you do some magic with the likelihood ratio and you get a statistic that behaves like chi-squared, apart from being exactly zero at your best-fit value. So it's no use for estimating quality-of-fit, but you can use it to get error regions just the way you would if you'd had Gaussian statistics and a chi-squared fit. (Cash 1979, "Parameter estimation in astronomy through application of the likelihood ratio")
>
> Incidentally, I have some code implementing the Kuiper test, a modified K-S test that is sensitive to different aspects of the shape of the distribution, and (more importantly for me) is invariant under shifting a distribution or sample modulo 1. I haven't submitted it for inclusion because the interface I used is a little different from that used by scipy's K-S test, but if there's interest I'd be happy to contribute it.
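For concreteness, a minimal sketch of the statistic Johann and Anne describe, with the log(n!) term dropped (it does not depend on the model, so only differences of C between models are meaningful):

import numpy as np

def cash_C(counts, model):
    # Twice the negative Poisson log-likelihood, up to a
    # model-independent constant (Cash 1979).
    counts = np.asarray(counts, dtype=float)
    model = np.asarray(model, dtype=float)
    return 2.0 * np.sum(model - counts * np.log(model))

Differences of C between fitted models can then be treated like differences of chi-squared when drawing confidence regions, as in the likelihood-ratio construction described above.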
>
> Anne
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From rpg.314 at gmail.com Sat Oct 24 09:31:52 2009 From: rpg.314 at gmail.com (Rohit Garg) Date: Sat, 24 Oct 2009 19:01:52 +0530 Subject: [SciPy-User] Poor scalability of embarrassingly parallel code with multiprocessing Message-ID: <4d5dd8c20910240631p57413e7ej6cbf9f54a046a91d@mail.gmail.com>

Hi,

I am attaching a very simple, embarrassingly parallel code. The problem is that it shows practically no speedup at all on my dual-core machine.

64 bit, python 2.6,
==============
~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 3
3
done

real 1m5.686s
user 1m41.881s
sys 0m2.846s
~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 2
2
done

real 1m6.052s
user 1m45.550s
sys 0m2.797s
~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 1
1
done

real 1m21.696s
user 1m58.327s
sys 0m2.426s
~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 4
4
done

real 1m3.422s
user 1m39.742s
sys 0m3.179s
========================

And the following tests were done on a quad core machine over ssh, (so network latencies *may* be part of the problem here).

32 bit, python 2.6,
=============================
~@pixel> time python pool-test.py 1
1
done

real 0m58.772s
user 1m24.549s
sys 0m1.720s
~@pixel> time python pool-test.py 2
2
done

real 0m44.396s
user 1m18.525s
sys 0m2.000s
~@pixel> time python pool-test.py 3
3
done

real 0m45.898s
user 1m21.293s
sys 0m1.904s
~@pixel> time python pool-test.py 4
4
done

real 0m44.196s
user 1m19.561s
sys 0m2.056s
~@pixel> time python pool-test.py 5
5
done

real 0m47.451s
user 1m24.997s
sys 0m2.416s
~@pixel> time python pool-test.py 6
6
done

real 0m49.181s
user 1m25.881s
sys 0m2.356s
==============================

The code which delivered these results is here

===============================
#test of Pool's map capabilities
from multiprocessing import Pool
import numpy
import sys

procs=int(sys.argv[1])
print procs

def f(x):
    index,probs=x
    return index,2.0*probs

prob_samples=1000000

probX=numpy.linspace(0.2, 0.3, prob_samples)

Input=[(i,probX[i]) for i in xrange(prob_samples) ]

pool = Pool(processes=procs)

pool.map(f, Input)

print 'done'
===================

Did I make a mistake somewhere? What have been your experiences with the multiprocessing module in general and pool in particular? What approaches would you suggest to improve the speed scaling?

Regards,
--
Rohit Garg

http://rpg-314.blogspot.com/

Senior Undergraduate
Department of Physics
Indian Institute of Technology
Bombay

-------------- next part --------------
A non-text attachment was scrubbed...
Name: pool-test.py
Type: text/x-python
Size: 443 bytes
Desc: not available
URL:

From pav at iki.fi Sat Oct 24 13:55:21 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 24 Oct 2009 20:55:21 +0300 Subject: [SciPy-User] Poor scalability of embarrassingly parallel code with multiprocessing In-Reply-To: <4d5dd8c20910240631p57413e7ej6cbf9f54a046a91d@mail.gmail.com> References: <4d5dd8c20910240631p57413e7ej6cbf9f54a046a91d@mail.gmail.com> Message-ID: <1256406920.6400.38.camel@idol>

Sat, 2009-10-24 at 19:01 +0530, Rohit Garg wrote:
[clip]
> I am attaching a very simple, embarrassingly parallel code. The problem is that it shows practically no speedup at all on my dual-core machine.
[clip]
> ===============================
> #test of Pool's map capabilities
> from multiprocessing import Pool
> import numpy
> import sys
>
> procs=int(sys.argv[1])
> print procs
> def f(x):
>     index,probs=x
>     return index,2.0*probs
>
> prob_samples=1000000
>
> probX=numpy.linspace(0.2, 0.3, prob_samples)
>
> Input=[(i,probX[i]) for i in xrange(prob_samples) ]
>
> pool = Pool(processes=procs)
>
> pool.map(f, Input)

The scaling problem you have is that:

1) For each communication event, there is some overhead. You invoke `prob_samples` communication events, which incurs the communication overhead 1000000 times. This is where your code spends most of its time.

2) Your computational sub-problem is limited by memory bus speed: most of the time is taken by transfer of data between the main memory and CPU caches.

In general, if you have this large amount of data per CPU, you can suppress overhead costs by communicating the samples in, say, 1000-element blocks. But because of 2), your problem is essentially unscalable. The CPU <-> main memory communication is a bottleneck that you cannot work around.

Here's a better example function:

def f(x):
    index,probs=x
    for k in xrange(1000):
        numpy.cos(probs, probs)
    return index, probs

--
Pauli Virtanen

From ckoers at telenet.be Sat Oct 24 17:58:57 2009 From: ckoers at telenet.be (Gaetan Cesar Koers) Date: Sat, 24 Oct 2009 23:58:57 +0200 Subject: [SciPy-User] How to make sure that a module gets re-loaded Message-ID: <4AE378A1.7060500@telenet.be>

Hello,

How to make sure that a module gets re-loaded when I've edited its source code?

I thought that either 'import' or 'reload' could do this, but I have the idea that only a cold shell restart helps.

The time I've lost by chasing a bug that afterwards proves to be a remnant of a previous module version!

2nd, related perhaps:

The code I'm working on, in module 'M.py', is located in a directory named 'M'. From the parent of directory 'M', when I do 'import M' or 'reload M', how can I make sure that Python imports/reloads 'M/M.py'?

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) shell = IPython 0.8.1

thanks in advance for your feedback

bye
C

From ellisonbg.net at gmail.com Sat Oct 24 22:17:54 2009 From: ellisonbg.net at gmail.com (Brian Granger) Date: Sat, 24 Oct 2009 19:17:54 -0700 Subject: [SciPy-User] Poor scalability of embarrassingly parallel code with multiprocessing In-Reply-To: <4d5dd8c20910240631p57413e7ej6cbf9f54a046a91d@mail.gmail.com> References: <4d5dd8c20910240631p57413e7ej6cbf9f54a046a91d@mail.gmail.com> Message-ID: <6ce0ac130910241917j270dbce2r98162c2e4d8397c8@mail.gmail.com>

I haven't looked at your algorithm, but using "time" to time parallel codes like this can give very unreliable results. I have seen many times people get prematurely excited or depressed about the parallel scaling of a program using time. So the first step would be to figure out a better way of timing your code.

Brian

On Sat, Oct 24, 2009 at 6:31 AM, Rohit Garg wrote:
> Hi,
>
> I am attaching a very simple, embarrassingly parallel code. The problem is that it shows practically no speedup at all on my dual-core machine.
> > 64 bit, python 2.6, > ============== > ~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 3 > 3 > done > > real 1m5.686s > user 1m41.881s > sys 0m2.846s > ~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 2 > 2 > done > > real 1m6.052s > user 1m45.550s > sys 0m2.797s > ~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 1 > 1 > done > > real 1m21.696s > user 1m58.327s > sys 0m2.426s > ~/Documents/lingo/python/multiprocessing at rpg> time python pool-test.py 4 > 4 > done > > real 1m3.422s > user 1m39.742s > sys 0m3.179s > ======================== > > And the following tests were done on a quad core machine over ssh, (so > network latencies *may* be part of the problem here). > > 32 bit, python 2.6, > ============================= > ~@pixel> time python pool-test.py 1 > 1 > done > > real 0m58.772s > user 1m24.549s > sys 0m1.720s > ~@pixel> time python pool-test.py 2 > 2 > done > > real 0m44.396s > user 1m18.525s > sys 0m2.000s > ~@pixel> time python pool-test.py 3 > 3 > done > > real 0m45.898s > user 1m21.293s > sys 0m1.904s > ~@pixel> time python pool-test.py 4 > 4 > done > > real 0m44.196s > user 1m19.561s > sys 0m2.056s > ~@pixel> time python pool-test.py 5 > 5 > done > > real 0m47.451s > user 1m24.997s > sys 0m2.416s > ~@pixel> time python pool-test.py 6 > 6 > done > > real 0m49.181s > user 1m25.881s > sys 0m2.356s > ============================== > > The code which delivered these results is here > > > =============================== > #test of Pool's map capablities > from multiprocessing import Pool > import numpy > import sys > > procs=int(sys.argv[1]) > print procs > def f(x): > index,probs=x > return index,2.0*probs > > prob_samples=1000000 > > probX=numpy.linspace(0.2, 0.3, prob_samples) > > Input=[(i,probX[i]) for i in xrange(prob_samples) ] > > pool = Pool(processes=procs) > > pool.map(f, Input) > > print 'done' > =================== > > > Did I make a mistake somewhere? What have been your experiences with > multiprocessing module in general and pool in particular? What > approaches would you suggest to improve the speed scaling. > > Regards, > -- > Rohit Garg > > http://rpg-314.blogspot.com/ > > Senior Undergraduate > Department of Physics > Indian Institute of Technology > Bombay > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From opossumnano at gmail.com Sun Oct 25 04:38:37 2009 From: opossumnano at gmail.com (Tiziano Zito) Date: Sun, 25 Oct 2009 09:38:37 +0100 Subject: [SciPy-User] [ANN] Advanced Scientific Programming in Python Winter School in Warsaw, Poland Message-ID: Advanced Scientific Programming in Python a Winter School by the G-Node and University of Warsaw Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only few scientists actually use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques with theoretical lectures and practical exercises tailored to the needs of a programming scientist. New skills will be tested in a real programming project: we will team up to develop an entertaining scientific computer game. 
We'll use the Python programming language for the entire course. Python works as a simple programming language for beginners, but more importantly, it also works great in scientific simulations and data analysis. Clean language design and easy extensibility are driving Python to become a standard tool for scientific computing. Some of the most useful open source libraries for scientific computing and visualization will be presented.

This winter school is targeted at Post-docs and PhD students from all areas. Substantial proficiency in Python or in another language (e.g. Java, C/C++, MATLAB, Mathematica) is absolutely required. An optional, one-day introduction to Python is offered to participants without prior experience with the language.

Date and Location: February 8th - 12th, 2010. Warsaw, Poland.

Preliminary Program:

- Day 0 (Mon Feb 8) - [Optional] Dive into Python
- Day 1 (Tue Feb 9) - Software Carpentry
  - Documenting code and using version control
  - Test-driven development and unit testing
  - Debugging, profiling and benchmarking techniques
  - Object-oriented programming, design patterns, and agile programming
- Day 2 (Wed Feb 10) - Scientific Tools for Python
  - NumPy, SciPy, Matplotlib
  - Data serialization: from pickle to databases
  - Programming project in the afternoon
- Day 3 (Thu Feb 11) - The Quest for Speed
  - Writing parallel applications in Python
  - When parallelization does not help: the starving CPUs problem
  - Programming project in the afternoon
- Day 4 (Fri Feb 12) - Practical Software Development
  - Software design
  - Efficient programming in teams
  - Quality Assurance
  - Programming project final

Applications:

Applications should be sent before December 6th, 2009 to: python-winterschool at g-node.org

No fee is charged but participants should take care of travel, living, and accommodation expenses. Applications should include full contact information (name, affiliation, email & phone), a *short* CV and a *short* statement addressing the following questions:

- What is your educational background?
- What experience do you have in programming?
- Why do you think "Advanced Scientific Programming in Python" is an appropriate course for your skill profile?

Candidates will be selected on the basis of their profile. Places are limited: early application is recommended. Notifications of acceptance will be sent by December 14th, 2009.

Faculty

- Francesc Alted, author of PyTables, Castelló de la Plana, Spain [Day 3]
- Pietro Berkes, Volen Center for Complex Systems, Brandeis University, USA [Day 1]
- Zbigniew Jędrzejewski-Szmek, Institute of Experimental Physics, University of Warsaw, Poland [Day 0]
- Eilif Muller, Laboratory of Computational Neuroscience, Ecole Polytechnique Fédérale de Lausanne, Switzerland [Day 3]
- Bartosz Teleńczuk, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany [Day 2]
- Niko Wilbert, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Germany [Day 1]
- Tiziano Zito, Bernstein Center for Computational Neuroscience, Berlin, Germany [Day 4]

Organized by Piotr Durka, Joanna and Zbigniew Jędrzejewscy-Szmek (Institute of Experimental Physics, University of Warsaw), and Tiziano Zito (German Neuroinformatics Node of the INCF).
Website: http://www.g-node.org/python-winterschool
Contact: python-winterschool at g-node.org

From gnurser at googlemail.com Sun Oct 25 04:47:22 2009 From: gnurser at googlemail.com (George Nurser) Date: Sun, 25 Oct 2009 08:47:22 +0000 Subject: [SciPy-User] How to make sure that a module gets re-loaded In-Reply-To: <4AE378A1.7060500@telenet.be> References: <4AE378A1.7060500@telenet.be> Message-ID: <1d1e6ea70910250147n36995f27p84ba51f0d318c330@mail.gmail.com>

2009/10/24 Gaetan Cesar Koers :
> Hello,
>
> How to make sure that a module gets re-loaded when I've edited its source code?
>
> I thought that either 'import' or 'reload' could do this, but I have the idea that only a cold shell restart helps.
>
> The time I've lost by chasing a bug that afterwards proves to be a remnant of a previous module version!

reload will do this if your module is pure python. If it is written in C or FORTRAN you need to restart the ipython shell.

>
> 2nd, related perhaps:
>
> The code I'm working on, in module 'M.py', is located in a directory named 'M'. From the parent of directory 'M', when I do 'import M' or 'reload M', how can I make sure that Python imports/reloads 'M/M.py'?

There are at least three ways to do this. Below abspath-M is the full absolute path to directory M, and relpath-M the relative path.

1. Add abspath-M to the PYTHONPATH environment variable
2. In ipython do
import sys
sys.path.append('abspath') OR sys.path.append('relpath')

[so e.g. if you are running ipython from the parent of M, simply sys.path.append('M')]

3. Create a file named M.pth in a directory already in sys.path. This file should contain the line abspath (or relpath). I put .pth files in my $HOME/lib/python2.5/site-packages directory. You can find out where your personal site-packages directory is by inspecting the path from python or ipython with
print '\n'.join(sys.path)

So e.g. if your modules are in /users/xxx/bigproject/M and you have a site-packages directory at /users/xxx/lib/python2.5/site-packages

Create a file /users/xxx/lib/python2.5/site-packages/M.pth

with content (just one line)
/users/xxx/bigproject/M

or with content
../../../bigproject/M

Method 3 is cleanest, but allows a python program running from *any* directory to access your modules in /users/xxx/bigproject/M, as does method 1.

HTH. George Nurser.

From pav at iki.fi Sun Oct 25 06:51:15 2009 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 25 Oct 2009 12:51:15 +0200 Subject: [SciPy-User] How to make sure that a module gets re-loaded In-Reply-To: <4AE378A1.7060500@telenet.be> References: <4AE378A1.7060500@telenet.be> Message-ID: <1256467874.6665.1.camel@idol>

Sat, 2009-10-24 at 23:58 +0200, Gaetan Cesar Koers wrote:
> How to make sure that a module gets re-loaded when I've edited its source code?
>
> I thought that either 'import' or 'reload' could do this, but I have the idea that only a cold shell restart helps.
>
> The time I've lost by chasing a bug that afterwards proves to be a remnant of a previous module version!

If it's a pure-python module, for which reload() works, you can automate reloading by using IPython and doing

import ipy_autoreload
%autoreload 2

See also

%autoreload?
%aimport?
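A short plain-Python session combining George's sys.path answer with a manual reload, as a minimal sketch (hypothetical paths; assumes M.py is pure Python, as both answers require):

import sys
sys.path.append('/users/xxx/bigproject/M')   # method 2 above; a .pth file works too

import M          # picks up M.py from the appended directory
# ... edit M.py in an editor ...
M = reload(M)     # Python 2 builtin: re-executes the module's code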
--
Pauli Virtanen

From ckoers at telenet.be Sun Oct 25 08:03:23 2009 From: ckoers at telenet.be (Cesar Koers) Date: Sun, 25 Oct 2009 13:03:23 +0100 Subject: [SciPy-User] How to make sure that a module gets re-loaded In-Reply-To: <1d1e6ea70910250147n36995f27p84ba51f0d318c330@mail.gmail.com> References: <4AE378A1.7060500@telenet.be> <1d1e6ea70910250147n36995f27p84ba51f0d318c330@mail.gmail.com> Message-ID: <4AE43E8B.3030109@telenet.be>

George Nurser wrote:
> 2009/10/24 Gaetan Cesar Koers :
>>
>> The code I'm working on, in module 'M.py', is located in a directory named 'M'. From the parent of directory 'M', when I do 'import M' or 'reload M', how can I make sure that Python imports/reloads 'M/M.py'?
>
> There are at least three ways to do this. Below abspath-M is the full absolute path to directory M, and relpath-M the relative path.
>
> 1. Add abspath-M to the PYTHONPATH environment variable
> 2. In ipython do
> import sys
> sys.path.append('abspath') OR sys.path.append('relpath')
>
> [so e.g. if you are running ipython from the parent of M, simply sys.path.append('M')]
>
> 3. Create a file named M.pth in a directory already in sys.path. This file should contain the line abspath (or relpath). I put .pth files in my $HOME/lib/python2.5/site-packages directory.
> You can find out where your personal site-packages directory is by inspecting the path from python or ipython with
> print '\n'.join(sys.path)
>
> So e.g. if your modules are in /users/xxx/bigproject/M and you have a site-packages directory at /users/xxx/lib/python2.5/site-packages
> Create a file /users/xxx/lib/python2.5/site-packages/M.pth
>
> with content (just one line)
> /users/xxx/bigproject/M
>
> or with content
> ../../../bigproject/M
>
> Method 3 is cleanest, but allows a python program running from *any* directory to access your modules in /users/xxx/bigproject/M, as does method 1.

Hi George, thanks for this explanation

And why does adding a directory to sys.path affect the 'reload' behaviour of the modules in that directory?

regards

C

From ckoers at telenet.be Sun Oct 25 07:56:45 2009 From: ckoers at telenet.be (Cesar Koers) Date: Sun, 25 Oct 2009 12:56:45 +0100 Subject: [SciPy-User] How to make sure that a module gets re-loaded In-Reply-To: <1256467874.6665.1.camel@idol> References: <4AE378A1.7060500@telenet.be> <1256467874.6665.1.camel@idol> Message-ID: <4AE43CFD.2090907@telenet.be>

Pauli Virtanen wrote:
> Sat, 2009-10-24 at 23:58 +0200, Gaetan Cesar Koers wrote:
>> How to make sure that a module gets re-loaded when I've edited its source code?
>>
>> I thought that either 'import' or 'reload' could do this, but I have the idea that only a cold shell restart helps.
>>
>> The time I've lost by chasing a bug that afterwards proves to be a remnant of a previous module version!
>
> If it's a pure-python module, for which reload() works, you can automate reloading by using IPython and doing
>
> import ipy_autoreload
> %autoreload 2
>
> See also
>
> %autoreload?
> %aimport?

Great, thanks for the tip!
I've upgraded IPython to 0.10 and will try this out

From gnurser at googlemail.com Sun Oct 25 08:38:55 2009 From: gnurser at googlemail.com (George Nurser) Date: Sun, 25 Oct 2009 12:38:55 +0000 Subject: [SciPy-User] How to make sure that a module gets re-loaded In-Reply-To: <4AE43E8B.3030109@telenet.be> References: <4AE378A1.7060500@telenet.be> <1d1e6ea70910250147n36995f27p84ba51f0d318c330@mail.gmail.com> <4AE43E8B.3030109@telenet.be> Message-ID: <1d1e6ea70910250538k76ce2510g733eed9cb85616bc@mail.gmail.com>

2009/10/25 Cesar Koers :
> Hi George, thanks for this explanation
>
> And why does adding a directory to sys.path affect the 'reload' behaviour of the modules in that directory?

Because it means that python can find the modules. The pythonpath and .pth files methods also allow python to find modules in that directory. Otherwise, if ipython is run from a directory different to the one which contains the module, python will not in general find the modules. Whether you can reload the module after that depends on whether it's pure python.

--George.

From helloworld777 at hotmail.com Sun Oct 25 17:58:51 2009 From: helloworld777 at hotmail.com (hello world) Date: Sun, 25 Oct 2009 16:58:51 -0500 Subject: [SciPy-User] Newbie Question :: Natural Cubic Spline Message-ID:

Hello,

Thanks for the example. It is greatly appreciated. Would you have an example of a monotonic cubic spline?

Thanks,
Alex

From: helloworld777 at hotmail.com To: scipy-user at scipy.org Subject: Newbie Question :: Natural Cubic Spline Date: Wed, 21 Oct 2009 15:48:19 -0500

Hello:

I am new to Python and the SciPy libraries and I was wondering if someone could help me with the following. Given the following x and y values:

x_list = [33, 56, 56.00000000002, 147, 238, 329, 420, 511, 602, 693, 791]
y_list = [0.99974, 0.99949, 0.99949, 0.99816, 0.99631, 0.99383, 0.99043, 0.98610, 0.98078, 0.97460, 0.96704]

I want to use a Natural Cubic Spline in order to determine the points at the following values:

interp_x_points = [31,62,92,123,153,184,215,243,274,304,335,365,396,427,457,488,518,549,580,608,639,669,700]

I have looked through the documentation and have had a tough time determining the correct syntax or which function to use. I was wondering what would be the SciPy syntax for the following:

Also, does scipy have the ability to extrapolate if a given X value is outside a specified range? In this example, please notice interp_x_points has a value (31) which is outside of the x_list range.

============================
my_spline_object = NaturalCubicSpline(x_list, y_list)

for interp_x_value in interp_x_points:
    interp_y_value = my_spline_object(interp_x_value) # return a floating value
    print interp_y_value

Many Thanks

Hotmail: Free, trusted and rich email service. Get it now.

_________________________________________________________________
Windows 7: It works the way you want. Learn more. http://www.microsoft.com/Windows/windows-7/default.aspx?ocid=PID24727::T:WLMTAGL:ON:WL:en-US:WWL_WIN_evergreen2:102009

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at pythonxy.com Mon Oct 26 02:52:03 2009 From: contact at pythonxy.com (Pierre Raybaut) Date: Mon, 26 Oct 2009 07:52:03 +0100 Subject: [SciPy-User] [ANN] Python(x,y) 2.6.2.0 released Message-ID: <4AE54713.50803@pythonxy.com>

Hi all,

I'm pleased to announce that Python(x,y) v2.6.2.0 has been released. From a certain point of view, it's a step back because it's now based on Python 2.6.2 instead of Python 2.6.3.
Actually it's a step forward because:

* now the VPython plugin does work (apparently, a bugfix in Python 2.6.3+ has introduced an incompatibility with the current boost.python release -- a binding tool on which VPython is based -- in other words, let's stick to Python 2.6.2 for a while, shall we?),
* Spyder was upgraded to v1.0.1 (with some important bugfixes, e.g. for user account names having unicode characters)
* xy was upgraded to v1.1.0:
  o some bugfixes: e.g. 'startups' and 'logs' directories location compatible with Vista's UAC
  o introducing a new test script: xytest (just run 'xytest' in your command window to check some basic features of your Python(x,y) installation)
* Eclipse/CDT: fixed C/C++ project configuration issue.

Please note that future Updates will be available for Python(x,y) v2.6.2.0 only. In other words, if you have already installed v2.6.3.0, please uninstall it completely and then install v2.6.2.0 (of course this applies to v2.1.x as well).

- Pierre

From richard.w.pfeifer at googlemail.com Mon Oct 26 10:40:04 2009 From: richard.w.pfeifer at googlemail.com (Richard Pfeifer) Date: Mon, 26 Oct 2009 07:40:04 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Accessing calculated function values of scipy.interpolated.quad Message-ID: <26061012.post@talk.nabble.com>

Hi Scipy-Users,

I've got a question concerning scipy.integrate:
I'm interested in
  (1) the Integral F = Int(f, a, b) of the function f(x) in the range (a,b)
AND
  (2) the values f(x_i) used for the integration (to plot f(x) in the same range).

scipy.integrate.quad can do the first job very well but I did not find a way to get the values f(x_i). I could let scipy.integrate.quad do its job and then evaluate f(x) again for the plotting but this would need twice the time...

Is there any way to get the [x_i, f(x_i)] scipy.integrate.quad calculated for the integration?

Thanks,
Richard
--
View this message in context: http://www.nabble.com/Accessing-calculated-function-values-of-scipy.interpolated.quad-tp26061012p26061012.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From josef.pktd at gmail.com Mon Oct 26 11:51:57 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 26 Oct 2009 11:51:57 -0400 Subject: [SciPy-User] [SciPy-user] Accessing calculated function values of scipy.interpolated.quad In-Reply-To: <26061012.post@talk.nabble.com> References: <26061012.post@talk.nabble.com> Message-ID: <1cd32cbb0910260851y51d222dcv112eda68ce6e4f21@mail.gmail.com>

On Mon, Oct 26, 2009 at 10:40 AM, Richard Pfeifer wrote:
>
> Hi Scipy-Users,
>
> I've got a question concerning scipy.integrate: I'm interested in
>   (1) the Integral F = Int(f, a, b) of the function f(x) in the range (a,b)
> AND
>   (2) the values f(x_i) used for the integration (to plot f(x) in the same range).
>
> scipy.integrate.quad can do the first job very well but I did not find a way to get the values f(x_i). I could let scipy.integrate.quad do its job and then evaluate f(x) again for the plotting but this would need twice the time...
>
> Is there any way to get the [x_i, f(x_i)] scipy.integrate.quad calculated for the integration?

I think the answer is no.
Looking at the source of scipy.integrate.quad, the python function returns all the return values of the fortran routine. So the actual points of evaluation seem to be temp variables inside the fortran program.
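A partial alternative, hedged: for the plain (no weight function) case, quad's full_output flag returns a dict with the adaptive subinterval bounds, which shows where the integrand was sampled most densely, though not the individual x_i:

import numpy as np
from scipy import integrate

def f(x):
    return np.exp(-x) * np.sin(50.0 * x)

result, abserr, info = integrate.quad(f, 0, 1, full_output=1)
n = info['last']                        # number of subintervals used
left, right = info['alist'][:n], info['blist'][:n]
print info['neval'], 'evaluations on', n, 'subintervals'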
Josef

> Thanks,
> Richard
> --
> View this message in context: http://www.nabble.com/Accessing-calculated-function-values-of-scipy.interpolated.quad-tp26061012p26061012.html
> Sent from the Scipy-User mailing list archive at Nabble.com.
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From pav+sp at iki.fi Mon Oct 26 13:01:22 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Mon, 26 Oct 2009 17:01:22 +0000 (UTC) Subject: [SciPy-User] [SciPy-user] Accessing calculated function values of scipy.interpolated.quad References: <26061012.post@talk.nabble.com> <1cd32cbb0910260851y51d222dcv112eda68ce6e4f21@mail.gmail.com> Message-ID:

Mon, 26 Oct 2009 11:51:57 -0400, josef.pktd wrote:
> On Mon, Oct 26, 2009 at 10:40 AM, Richard Pfeifer wrote:
>>
>> Hi Scipy-Users,
>>
>> I've got a question concerning scipy.integrate: I'm interested in
>>   (1) the Integral F = Int(f, a, b) of the function f(x) in the range (a,b)
>> AND
>>   (2) the values f(x_i) used for the integration (to plot f(x) in the same range).
>> scipy.integrate.quad can do the first job very well but I did not find a way to get the values f(x_i). I could let scipy.integrate.quad do its job and then evaluate f(x) again for the plotting but this would need twice the time...
>>
>> Is there any way to get the [x_i, f(x_i)] scipy.integrate.quad calculated for the integration?
>
> I think the answer is no.
> Looking at the source of scipy.integrate.quad, the python function returns all the return values of the fortran routine. So the actual points of evaluation seem to be temp variables inside the fortran program.

There's no pre-made solution, but you can cook one yourself:

def func(x):
    return ...long computation...

values = []

def wrapfunc(x):
    y = func(x)
    values.append((x, y))
    return y

ig = integrate.quad(wrapfunc, 0, 1)

values = np.array(values)
values = values[np.argsort(values[:,0])]

--
Pauli Virtanen

From c-b at asu.edu Mon Oct 26 13:39:43 2009 From: c-b at asu.edu (Christopher Brown) Date: Mon, 26 Oct 2009 10:39:43 -0700 Subject: [SciPy-User] Audiolab on Py2.6 Message-ID: <4AE5DEDF.7070701@asu.edu>

Hi List,

Has anyone gotten scikits.audiolab working with python 2.6? Here is the error I get on a clean Python 2.6 install with numpy and audiolab installed (using the audiolab 0.10.2 installer for py2.6 I downloaded from pypi, and a clean Win XPSP3 install):

>>> from scikits import audiolab
Traceback (most recent call last):
  File "C:\Python26\lib\site-packages\scikits\audiolab\__init__.py", line 25, in
    from pysndfile import formatinfo, sndfile
  File "C:\Python26\lib\site-packages\scikits\audiolab\pysndfile\__init__.py", line 1, in
    from _sndfile import Sndfile, Format, available_file_formats, available_encodings
ImportError: DLL load failed: The specified procedure could not be found.

Any ideas? Everything works fine on py2.5.

--
Chris

From silva at lma.cnrs-mrs.fr Mon Oct 26 15:00:25 2009 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Mon, 26 Oct 2009 20:00:25 +0100 Subject: [SciPy-User] Audiolab on Py2.6 In-Reply-To: <4AE5DEDF.7070701@asu.edu> References: <4AE5DEDF.7070701@asu.edu> Message-ID: <1256583625.26422.3.camel@localhost.localdomain>

On Monday 26 October 2009 at 10:39 -0700, Christopher Brown wrote:
> >>> from scikits import audiolab
> Traceback (most recent call last):
>   File "C:\Python26\lib\site-packages\scikits\audiolab\__init__.py", line 25, in
>     from pysndfile import formatinfo, sndfile
>   File "C:\Python26\lib\site-packages\scikits\audiolab\pysndfile\__init__.py", line 1, in
>     from _sndfile import Sndfile, Format, available_file_formats, available_encodings
> ImportError: DLL load failed: The specified procedure could not be found.

Make sure you have sndfile installed
http://www.mega-nerd.com/libsndfile/

--
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.

From c-b at asu.edu Mon Oct 26 14:04:22 2009 From: c-b at asu.edu (Christopher Brown) Date: Mon, 26 Oct 2009 11:04:22 -0700 Subject: [SciPy-User] Audiolab on Py2.6 In-Reply-To: <1256583625.26422.3.camel@localhost.localdomain> References: <4AE5DEDF.7070701@asu.edu> <1256583625.26422.3.camel@localhost.localdomain> Message-ID: <4AE5E4A6.3000904@asu.edu>

Thanks for the suggestion. However, audiolab didn't need it installed on Python 2.5. I also see the file _sndfile.dll in the audiolab folder, which I assume contains the sndfile code (it is ~3.5mb).

I installed it anyway, and I copied the dll into the audiolab folder, but the error persists. Any other suggestions?

On 10/26/2009 12:00 PM, Fabrice Silva wrote:
> On Monday 26 October 2009 at 10:39 -0700, Christopher Brown wrote:
>
>>>> from scikits import audiolab
>>>>
> Traceback (most recent call last):
>   File "C:\Python26\lib\site-packages\scikits\audiolab\__init__.py", line 25, in
>     from pysndfile import formatinfo, sndfile
>   File "C:\Python26\lib\site-packages\scikits\audiolab\pysndfile\__init__.py", line 1, in
>     from _sndfile import Sndfile, Format, available_file_formats, available_encodings
> ImportError: DLL load failed: The specified procedure could not be found.
>
> Make sure you have sndfile installed
> http://www.mega-nerd.com/libsndfile/
>
>

From ijstokes at crystal.harvard.edu Mon Oct 26 18:08:42 2009 From: ijstokes at crystal.harvard.edu (Ian Stokes-Rees) Date: Mon, 26 Oct 2009 18:08:42 -0400 Subject: [SciPy-User] Extracting float vector from tuple vector Message-ID: <4AE61DEA.2060700@crystal.harvard.edu>

Take 3 at sending this:

> I have a vector that is defined as follows:
>
>     dtype = [("score", "f4"), ("rfac", "f4"), ("codefull", "a10"), ("code2", "a2"), ("subset","a4"), ("source","a10")]
>     results = np.zeros((entry_count,), dtype=dtype)
>
> What I'd love to be able to do is to refer to a "slice" taken from a selection of rows, and only one tuple entry, e.g.:
>
> results[:]["score"]
>
> would return a vector of "f4" floats only, from the first entry of every tuple in results.
>
> I can't figure out how to do this!
>
> Any suggestions gratefully received.

From robert.kern at gmail.com Mon Oct 26 18:14:29 2009 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 26 Oct 2009 17:14:29 -0500 Subject: [SciPy-User] Extracting float vector from tuple vector In-Reply-To: <4AE61DEA.2060700@crystal.harvard.edu> References: <4AE61DEA.2060700@crystal.harvard.edu> Message-ID: <3d375d730910261514j1804ebd7hd6058c835f3c1f4a@mail.gmail.com>

On Mon, Oct 26, 2009 at 17:08, Ian Stokes-Rees wrote:
> Take 3 at sending this:
>
>> I have a vector that is defined as follows:
>>
>>     dtype = [("score", "f4"), ("rfac", "f4"), ("codefull", "a10"), ("code2", "a2"), ("subset","a4"), ("source","a10")]
>>     results = np.zeros((entry_count,), dtype=dtype)
>>
>> What I'd love to be able to do is to refer to a "slice" taken from a selection of rows, and only one tuple entry, e.g.:
>>
>> results[:]["score"]
>>
>> would return a vector of "f4" floats only, from the first entry of every tuple in results.
>>
>> I can't figure out how to do this!

You just did it. Although "results['score']" amounts to basically the same thing with less fuss.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From jkington at wisc.edu Mon Oct 26 18:15:24 2009 From: jkington at wisc.edu (Joe Kington) Date: Mon, 26 Oct 2009 17:15:24 -0500 Subject: [SciPy-User] Extracting float vector from tuple vector In-Reply-To: <4AE61DEA.2060700@crystal.harvard.edu> References: <4AE61DEA.2060700@crystal.harvard.edu> Message-ID:

Maybe I'm confused here, but doesn't "results['score']" give you what you need?

On Mon, Oct 26, 2009 at 5:08 PM, Ian Stokes-Rees <ijstokes at crystal.harvard.edu> wrote:
> Take 3 at sending this:
>
> > I have a vector that is defined as follows:
> >
> >     dtype = [("score", "f4"), ("rfac", "f4"), ("codefull", "a10"), ("code2", "a2"), ("subset","a4"), ("source","a10")]
> >     results = np.zeros((entry_count,), dtype=dtype)
> >
> > What I'd love to be able to do is to refer to a "slice" taken from a selection of rows, and only one tuple entry, e.g.:
> >
> > results[:]["score"]
> >
> > would return a vector of "f4" floats only, from the first entry of every tuple in results.
> >
> > I can't figure out how to do this!
> >
> > Any suggestions gratefully received.
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Mon Oct 26 18:19:35 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 26 Oct 2009 18:19:35 -0400 Subject: [SciPy-User] Extracting float vector from tuple vector In-Reply-To: References: <4AE61DEA.2060700@crystal.harvard.edu> Message-ID: <1cd32cbb0910261519y6313dc97t2fc4c8d389650312@mail.gmail.com>

On Mon, Oct 26, 2009 at 6:15 PM, Joe Kington wrote:
> Maybe I'm confused here, but doesn't "results['score']" give you what you need?
> On Mon, Oct 26, 2009 at 5:08 PM, Ian Stokes-Rees <ijstokes at crystal.harvard.edu> wrote:
>>
>> Take 3 at sending this:
>>
>> > I have a vector that is defined as follows:
>> >
>> >     dtype = [("score", "f4"), ("rfac", "f4"), ("codefull", "a10"), ("code2", "a2"), ("subset","a4"), ("source","a10")]
>> >     results = np.zeros((entry_count,), dtype=dtype)
>> >
>> > What I'd love to be able to do is to refer to a "slice" taken from a selection of rows, and only one tuple entry, e.g.:
>> >
>> > results[:]["score"]
>> >
>> > would return a vector of "f4" floats only, from the first entry of every tuple in results.
>> >
>> > I can't figure out how to do this!
>> >
>> > Any suggestions gratefully received.
>> >> >>> results[2:5]["score"] array([ 2., 3., 4.], dtype=float32) >>> results["score"][2:5] array([ 2., 3., 4.], dtype=float32) Josef >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ralf.gommers at googlemail.com Tue Oct 27 06:52:51 2009 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 27 Oct 2009 11:52:51 +0100 Subject: [SciPy-User] scipy trunk build / segfault problem Message-ID: Hi, At the moment I seem to be unable to build SciPy. A few weeks ago on the same machine it worked fine. I collected as much info as I could below. OS: OS X 10.6, Snow Leopard gcc: $ gcc --version i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5646) $ gfortran --version GNU Fortran (GCC) 4.2.3 (from http://r.research.att.com/tools/) $ python --version Python 2.6.1 (default OS X Python) Before each build I remove old files with: $ rm -rf build/ $ sudo rm /Library/Python/2.6/site-packages/numpy.egg-link (and same for SciPy) Then I build NumPy (r7593) in-place with: $ LDFLAGS="-lgfortran -arch x86_64" FFLAGS="-arch x86_64" $ NPY_SEPARATE_COMPILATION=1 python setupscons.py scons -i $ python setupegg.py develop (from http://projects.scipy.org/numpy/wiki/BuildWithNumScons , also rebuilt once used default setup.py, made no difference) Now the in-place build of SciPy (r6053) with: $ LDFLAGS="-lgfortran -arch x86_64" FFLAGS="-arch x86_64" $ NPY_SEPARATE_COMPILATION=1 python setupscons.py scons -i fails with message: http://pastebin.com/m62fe52de Building with: $ LDFLAGS="-lgfortran -arch x86_64" FFLAGS="-arch x86_64" $ python setup.py build_ext -i $ sudo python setupegg.py develop seems to work (build output: http://pastebin.com/d31a920fc) but then segfaults with the message "Python quit unexpectedly while using the multiarray.so plug-in": >>> import scipy >>> import scipy.linalg Segmentation fault Tried getting a traceback with gdb, but I think that I need to rebuild Python itself for that to be useful?: (gdb) run scipyimport.py Starting program: /usr/bin/python scipyimport.py Reading symbols for shared libraries .++..... done Program received signal SIGTRAP, Trace/breakpoint trap. 0x00007fff5fc01028 in __dyld__dyld_start () (gdb) bt #0 0x00007fff5fc01028 in __dyld__dyld_start () #1 0x0000000100000000 in ?? () (gdb) #0 0x00007fff5fc01028 in __dyld__dyld_start () #1 0x0000000100000000 in ?? () Any pointers would be highly appreciated! Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Oct 27 07:01:13 2009 From: cournape at gmail.com (David Cournapeau) Date: Tue, 27 Oct 2009 20:01:13 +0900 Subject: [SciPy-User] scipy trunk build / segfault problem In-Reply-To: References: Message-ID: <5b8d13220910270401l646ccb04t424475565e1ad073@mail.gmail.com> On Tue, Oct 27, 2009 at 7:52 PM, Ralf Gommers wrote: > Hi, > > At the moment I seem to be unable to build SciPy. A few weeks ago on the > same machine it worked fine. I collected as much info as I could below. > > I got the same problem yesterday on Snow Leopard - it was late and I did not investigate much, as I thought I was too tired to not miss something obvious. 
Please open a ticket on trac, I will try to look into it tonight,

David

From ralf.gommers at googlemail.com Tue Oct 27 07:14:37 2009 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 27 Oct 2009 12:14:37 +0100 Subject: [SciPy-User] scipy trunk build / segfault problem In-Reply-To: <5b8d13220910270401l646ccb04t424475565e1ad073@mail.gmail.com> References: <5b8d13220910270401l646ccb04t424475565e1ad073@mail.gmail.com> Message-ID:

On Tue, Oct 27, 2009 at 12:01 PM, David Cournapeau wrote:
> On Tue, Oct 27, 2009 at 7:52 PM, Ralf Gommers wrote:
> > Hi,
> >
> > At the moment I seem to be unable to build SciPy. A few weeks ago on the same machine it worked fine. I collected as much info as I could below.
> >
>
> I got the same problem yesterday on Snow Leopard - it was late and I did not investigate much, as I thought I was too tired to not miss something obvious.
>
> Please open a ticket on trac, I will try to look into it tonight,

Thanks, done. http://projects.scipy.org/scipy/ticket/1038

Ralf

> David
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From all6junk at gmail.com Tue Oct 27 12:32:03 2009 From: all6junk at gmail.com (Chaitanya Krishna Ande) Date: Tue, 27 Oct 2009 17:32:03 +0100 Subject: [SciPy-User] Permission bits on download from SF Message-ID:

Hi,

The permission bits on the unarchived directory downloaded from SourceForge are not set properly. This is what I get on my linux. It's just a minor annoyance albeit fixable.

Cheers,
Chaitanya

chaitanya at drona:~/tools> ls -ld scikits.timeseries-0.91.2/
drw-r--r-- 4 chaitanya users 4096 2009-08-23 18:45 scikits.timeseries-0.91.2/
chaitanya at drona:~/tools> ls -ltr scikits.timeseries-0.91.2/
ls: cannot access scikits.timeseries-0.91.2/PKG-INFO: Permission denied
ls: cannot access scikits.timeseries-0.91.2/setup.py: Permission denied
ls: cannot access scikits.timeseries-0.91.2/MANIFEST.in: Permission denied
ls: cannot access scikits.timeseries-0.91.2/setup.cfg: Permission denied
ls: cannot access scikits.timeseries-0.91.2/scikits.timeseries.egg-info: Permission denied
ls: cannot access scikits.timeseries-0.91.2/README.txt: Permission denied
ls: cannot access scikits.timeseries-0.91.2/LICENSE.txt: Permission denied
ls: cannot access scikits.timeseries-0.91.2/scikits: Permission denied
total 0
-????????? ? ? ? ? ? setup.py
-????????? ? ? ? ? ? setup.cfg
d????????? ? ? ? ? ? scikits.timeseries.egg-info
d????????? ? ? ? ? ? scikits
-????????? ? ? ? ? ? README.txt
-????????? ? ? ? ? ? PKG-INFO
-????????? ? ? ? ? ? MANIFEST.in
-????????? ? ? ? ? ? LICENSE.txt

From pgmdevlist at gmail.com Tue Oct 27 13:13:27 2009 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 27 Oct 2009 13:13:27 -0400 Subject: [SciPy-User] scipy trunk build / segfault problem In-Reply-To: References: Message-ID: <6149041F-FDEF-4BE2-B1FF-10ECF6F8A1CE@gmail.com>

On Oct 27, 2009, at 6:52 AM, Ralf Gommers wrote:
...
> Before each build I remove old files with:
> $ rm -rf build/
> $ sudo rm /Library/Python/2.6/site-packages/numpy.egg-link
> (and same for SciPy)
>
>
> Then I build NumPy (r7593) in-place with:

Ralf,
When you build in-place, you should try to remove old *.so from your numpy directory before rebuilding. That way, you're sure that the latest libraries are recompiled and you're not mixing old and new.
Not sure whether it's appropriate in your case, though... Cheers P. From c-b at asu.edu Tue Oct 27 13:28:56 2009 From: c-b at asu.edu (Christopher Brown) Date: Tue, 27 Oct 2009 10:28:56 -0700 Subject: [SciPy-User] Audiolab on Py2.6 In-Reply-To: <4AE5E4A6.3000904@asu.edu> References: <4AE5DEDF.7070701@asu.edu> <1256583625.26422.3.camel@localhost.localdomain> <4AE5E4A6.3000904@asu.edu> Message-ID: <4AE72DD8.4060508@asu.edu> On 10/26/2009 11:04 AM, Christopher Brown wrote: > Thanks for the suggestion. However, audiolab didn't need it installed on > Python 2.5. I also see the file _sndfile.dll in the audiolab folder, > which I assume contains the sndfile code (it is ~3.5mb). > > I installed it anyway, and I copied the dll into the audiolab folder, > but the error persists. Any other suggestions? > > Can anyone confirm that they got audiolab working with Python 2.6? I've tried several machines and get the same error on all of them. From Jim.Vickroy at noaa.gov Tue Oct 27 14:06:13 2009 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Tue, 27 Oct 2009 12:06:13 -0600 Subject: [SciPy-User] Audiolab on Py2.6 In-Reply-To: <4AE72DD8.4060508@asu.edu> References: <4AE5DEDF.7070701@asu.edu> <1256583625.26422.3.camel@localhost.localdomain> <4AE5E4A6.3000904@asu.edu> <4AE72DD8.4060508@asu.edu> Message-ID: <4AE73695.2040405@noaa.gov> Christopher Brown wrote: > On 10/26/2009 11:04 AM, Christopher Brown wrote: > >> Thanks for the suggestion. However, audiolab didn't need it installed on >> Python 2.5. I also see the file _sndfile.dll in the audiolab folder, >> which I assume contains the sndfile code (it is ~3.5mb). >> >> I installed it anyway, and I copied the dll into the audiolab folder, >> but the error persists. Any other suggestions? >> >> >> > Can anyone confirm that they got audiolab working with Python 2.6? I've > tried several machines and get the same error on all of them. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I just installed it on a fully-patched Windows XP Pro machine using Python 2.6.2 and also got the same, previously-reported error when attempting to import it, but I have no familiarity with this package. Could this be related to the switch in C compilers between Python 2.5 and 2.6? -- jv -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Tue Oct 27 15:14:24 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 27 Oct 2009 15:14:24 -0400 Subject: [SciPy-User] least-square filter design Message-ID: Anyone have code for least square (minimum mean square error) FIR filter design? From cournape at gmail.com Tue Oct 27 20:48:40 2009 From: cournape at gmail.com (David Cournapeau) Date: Wed, 28 Oct 2009 09:48:40 +0900 Subject: [SciPy-User] Audiolab on Py2.6 In-Reply-To: <4AE5E4A6.3000904@asu.edu> References: <4AE5DEDF.7070701@asu.edu> <1256583625.26422.3.camel@localhost.localdomain> <4AE5E4A6.3000904@asu.edu> Message-ID: <5b8d13220910271748x7db74eecx8bd7b60849476cd@mail.gmail.com> On Tue, Oct 27, 2009 at 3:04 AM, Christopher Brown wrote: > Thanks for the suggestion. However, audiolab didn't need it installed on > Python 2.5. I also see the file _sndfile.dll in the audiolab folder, > which I assume contains the sndfile code (it is ~3.5mb). > > I installed it anyway, and I copied the dll into the audiolab folder, > but the error persists. Any other suggestions? 
There is indeed a problem on 2.6, but I have not found the time to look
at it. Most likely linked to the manifest nonsense on windows,

David

From ralf.gommers at googlemail.com Wed Oct 28 06:35:19 2009
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Wed, 28 Oct 2009 11:35:19 +0100
Subject: [SciPy-User] scipy trunk build / segfault problem
In-Reply-To: <6149041F-FDEF-4BE2-B1FF-10ECF6F8A1CE@gmail.com>
References: <6149041F-FDEF-4BE2-B1FF-10ECF6F8A1CE@gmail.com>
Message-ID: 

On Tue, Oct 27, 2009 at 6:13 PM, Pierre GM wrote:
>
> On Oct 27, 2009, at 6:52 AM, Ralf Gommers wrote:
> > ...
> > Before each build I remove old files with:
> > $ rm -rf build/
> > $ sudo rm /Library/Python/2.6/site-packages/numpy.egg-link
> > (and same for SciPy)
> >
> > Then I build NumPy (r7593) in-place with:
>
> Ralf,
> When you build in-place, you should try to remove old *.so from your
> numpy directory before rebuilding. That way, you're sure that the
> latest libraries are recompiled and you're not mixing old and new.
> Not sure whether it's appropriate in your case, though...

Thanks Pierre and Stefan. I did not realize removing the build dir was
not enough. Removing, as Stefan suggested, with

$ git clean -xdf

helped in that I can now build both NumPy and SciPy with numscons, with:

$ export LDFLAGS="-lgfortran -arch x86_64"
$ export FFLAGS="-arch x86_64"
$ NPY_SEPARATE_COMPILATION=1 python setupscons.py scons -i

There is still a segfault occurring, but I can import and test modules
separately. The only module that segfaults is "cluster". All other
modules are fine, with the exception of 5 unit tests failing. I pasted
those below. This does look like an actual problem with cluster. And
David was seeing this as well, so I will reopen the ticket.

Cheers,
Ralf

In addition there are a few test failures:

SCIPY.SPARSE
======================================================================
ERROR: test_complex_nonsymmetric_modes (test_arpack.TestEigenComplexNonSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 264, in test_complex_nonsymmetric_modes
    self.eval_evec(m,typ,k,which)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 245, in eval_evec
    eval,evec=eigen(a,k,which=which)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/arpack.py", line 220, in eigen
    raise RuntimeError("Error info=%d in arpack"%info)
RuntimeError: Error info=-8 in arpack

======================================================================
ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 201, in test_nonsymmetric_modes
    self.eval_evec(m,typ,k,which)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 183, in eval_evec
    eval,evec=eigen(a,k,which=which,**kwds)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/arpack.py", line 220, in eigen
    raise RuntimeError("Error info=%d in arpack"%info)
RuntimeError: Error info=-8 in arpack

======================================================================
ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 211, in test_starting_vector
    self.eval_evec(self.symmetric[0],typ,k,which='LM',v0=v0)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 183, in eval_evec
    eval,evec=eigen(a,k,which=which,**kwds)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/arpack.py", line 220, in eigen
    raise RuntimeError("Error info=%d in arpack"%info)
RuntimeError: Error info=-8 in arpack

======================================================================
FAIL: test_complex_symmetric_modes (test_arpack.TestEigenComplexSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 153, in test_complex_symmetric_modes
    self.eval_evec(self.symmetric[0],typ,k,which)
  File "/Users/rgommers/Code/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 142, in eval_evec
    assert_array_almost_equal(eval,exact_eval,decimal=_ndigits[typ])
  File "/Users/rgommers/Code/numpy/numpy/testing/utils.py", line 763, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Users/rgommers/Code/numpy/numpy/testing/utils.py", line 607, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 100.0%)
 x: array([ 1.07188725 +6.23436023e-08j,  4.91291142 -3.25412906e-08j], dtype=complex64)
 y: array([ 5.+0.j,  6.+0.j], dtype=complex64)

----------------------------------------------------------------------
Ran 435 tests in 13.371s

FAILED (SKIP=11, errors=3, failures=2)

SCIPY.LINALG
======================================================================
ERROR: Check linalg works with non-aligned memory
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Python/2.6/site-packages/nose-0.11.1-py2.6.egg/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/Users/rgommers/Code/scipy/scipy/linalg/tests/test_decomp.py", line 1067, in test_aligned_mem
    eig(z.T, overwrite_a=True)
  File "/Users/rgommers/Code/scipy/scipy/linalg/decomp.py", line 158, in eig
    geev, = get_lapack_funcs(('geev',),(a1,))
  File "/Users/rgommers/Code/scipy/scipy/linalg/lapack.py", line 82, in get_lapack_funcs
    raise ValueError("Non-aligned array cannot be passed to LAPACK without copying")
ValueError: Non-aligned array cannot be passed to LAPACK without copying

----------------------------------------------------------------------
Ran 404 tests in 1.226s

FAILED (SKIP=1, errors=1)

SCIPY.FFTPACK
======================================================================
FAIL: test_random_real (test_basic.TestSingleIFFT)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rgommers/Code/scipy/scipy/fftpack/tests/test_basic.py", line 205, in test_random_real
    assert_array_almost_equal (y1, x)
  File "/Users/rgommers/Code/numpy/numpy/testing/utils.py", line 763, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Users/rgommers/Code/numpy/numpy/testing/utils.py", line 607, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 0.900900900901%)
 x: array([ 0.610 -2.416e-09j,  0.644 +2.148e-09j,  0.982 +6.927e-08j,
        0.394 -2.470e-09j,  0.453 -1.328e-08j,  0.555 +1.817e-08j,
        0.928 +7.155e-10j,  0.277 -9.148e-09j,  0.047 -4.693e-08j,
        0.677 -4.837e-09j,  0.922 +1.798e-08j,  0.128 -4.354e-08j,...
 y: array([ 0.61 ,  0.644,  0.982,  0.394,  0.453,  0.555,  0.928,  0.277,
        0.047,  0.677,  0.922,  0.128,  0.785,  0.218,  0.747,  0.68 ,
        0.792,  0.428,  0.986,  0.314,  0.341,  0.901,  0.244,  0.714,
        0.817,  0.27 ,  0.758,  0.788,  0.587,  0.937,...

----------------------------------------------------------------------
Ran 81 tests in 1.436s

FAILED (failures=1)

From mwojc at p.lodz.pl Wed Oct 28 07:58:05 2009
From: mwojc at p.lodz.pl (Marek Wojciechowski)
Date: Wed, 28 Oct 2009 12:58:05 +0100
Subject: [SciPy-User] [ANN] ffnet-0.6.2 released
Message-ID: 

ffnet version 0.6.2 is released and is available for download at:
http://ffnet.sourceforge.net

This release contains minor enhancements and compatibility improvements:
- ffnet works now with >=networkx-0.99;
- the neural network can now be called with a 2D array of inputs; it
  also returns a numpy array instead of a python list;
- the readdata function is now an alias to numpy.loadtxt;
- docstrings are improved.

What is ffnet?
--------------
ffnet is a fast and easy-to-use feed-forward neural network training
solution for python.

Unique features
---------------
1. Any network connectivity without cycles is allowed.
2. Training can be performed with use of several optimization schemes
   including: standard backpropagation with momentum, rprop, conjugate
   gradient, bfgs, tnc, genetic algorithm based optimization.
3. There is access to exact partial derivatives of network outputs vs.
   its inputs.
4. Automatic normalization of data.

Basic assumptions and limitations:
----------------------------------
1. Network has feed-forward architecture.
2. Input units have identity activation function; all other units have
   sigmoid activation function.
3. Provided data are automatically normalized, both input and output,
   with a linear mapping to the range (0.15, 0.85). Each input and
   output is treated separately (i.e. the linear map is unique for each
   input and output).
4. Function minimized during training is a sum of squared errors of
   each output for each training pattern.

Performance
-----------
Excellent computational performance is achieved by implementing core
functions in fortran 77 and wrapping them with f2py. ffnet outperforms
pure python training packages and is competitive with 'compiled
language' software. Moreover, a trained network can be exported to
fortran sources, compiled and called in many programming languages.

Usage
-----
Basic usage of the package is outlined below:

>>> from ffnet import ffnet, mlgraph, savenet, loadnet, exportnet
>>> conec = mlgraph( (2,2,1) )
>>> net = ffnet(conec)
>>> input = [ [0.,0.], [0.,1.], [1.,0.], [1.,1.] ]
>>> target = [ [1.], [0.], [0.], [1.] ]
>>> net.train_tnc(input, target, maxfun = 1000)
>>> net.test(input, target, iprint = 2)
>>> savenet(net, "xor.net")
>>> exportnet(net, "xor.f")
>>> net = loadnet("xor.net")
>>> answer = net( [ 0., 0. ] )
>>> partial_derivatives = net.derivative( [ 0., 0. ] )

Usage examples with full description can be found in the examples
directory of the source distribution or browsed at
http://ffnet.sourceforge.net.
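The 2D-input call announced above can be exercised with the XOR network
from the usage section. A minimal sketch, assuming only what the
announcement states; the exact return shape (one output row per input
pattern) is my guess and is not verified against 0.6.2:

import numpy as np
from ffnet import ffnet, mlgraph

conec = mlgraph((2, 2, 1))
net = ffnet(conec)
inp = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # 2D input array
trg = np.array([[1.], [0.], [0.], [1.]])
net.train_tnc(inp, trg, maxfun=1000)
out = net(inp)      # one call on the whole 2D array (new in 0.6.2)
print out.shape     # expected: (4, 1), one row per input pattern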
-- 
Marek Wojciechowski

From pgmdevlist at gmail.com Wed Oct 28 07:25:11 2009
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 28 Oct 2009 07:25:11 -0400
Subject: [SciPy-User] scipy trunk build / segfault problem
In-Reply-To: 
References: <6149041F-FDEF-4BE2-B1FF-10ECF6F8A1CE@gmail.com>
Message-ID: 

On Oct 28, 2009, at 6:35 AM, Ralf Gommers wrote:
>
> Ralf,
> When you build in-place, you should try to remove old *.so from your
> numpy directory before rebuilding. That way, you're sure that the
> latest libraries are recompiled and you're not mixing old and new.
> Not sure whether it's appropriate in your case, though...
>
> Thanks Pierre and Stefan. I did not realize removing the build dir
> was not enough.

It is in most cases, but not when you're building *in-place*, of
course... (I must admit that I got bitten by the very same thing last
time...)

From peridot.faceted at gmail.com Wed Oct 28 14:59:46 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 28 Oct 2009 14:59:46 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
In-Reply-To: 
References: 
Message-ID: 

I put the Kuiper test (and some other non-uniformity tests) up on github:
http://github.com/aarchiba/kuiper/

I'd like to clean up the tests somewhat (to reduce the code
duplication between test_kuiper, test_htest, and test_zm2) before
proposing any of it be included in scipy.

Anne

2009/10/21 Anne Archibald :
> 2009/10/21 :
>> When the parameters of the t distribution
>> are estimated, then the test has no power.
>> I don't know if adjusting the critical values
>> would help at least for comparing similar
>> distributions like norm and t.
>> (same problem with kstest)
>
> Yes, this test has its limitations. It's also not as sensitive as one
> might wish for distinguishing multimodal distributions from a
> constant.
>
>> To save on the lambda function, the cdf could take args given as
>> an argument to kuiper. Maybe the specification of the cdf argument
>> could be taken from kstest, but not (the generality of) the rvs argument
>> in kstest.
>
> I loathe interfaces that pass args; I find they make the interface
> more confusing while adding no functionality. But I realize that
> they're standard and not going away (what about the currying
> functionality in new pythons?), so I'll add it.
>
>> Your cdf_from_intervals looks nice for a univariate stepfunction
>> distribution. There are several pieces for this on the mailing list,
>> but I never got around to collecting them.
>
> This was just a quick hack because I needed it; I'm not sure it's
> worth including in scipy.stats, since an appropriately general version
> might be just as difficult to call as to reimplement.
>
>> Your interval functions, I haven't quite figured out. Is fold_intervals
>> supposed to convert a set of overlapping intervals to a non-overlapping
>> partitioning with associated weights? For example for merging
>> 2 histograms with non-identical bounds?
>
> That among other things. The context was that I had observations of an
> X-ray binary, consisting of several different segments with several
> different instruments. So I'd have one observation covering orbital
> phases 0.8 to 1.3 (i.e. 0.5 turns) with effective area 10 cm^2 (say),
> and another covering phases 0.2 to 2.1 (i.e. 1.9 turns) with effective
> area 12 cm^2 (say). So a constant flux from the source would produce a
> non-constant distribution of photons modulo 1; fold_intervals is
> designed to convert those two weighted spans to a collection of
> non-overlapping weighted intervals covering [0,1), for use in a CDF
> for the Kuiper test. The histogram stuff is there to allow plotting
> the results in a familiar form.
>
>> I haven't figured out the shift invariance.
>
> The idea is just that if you have, say, a collection X of samples from
> [0,1) that you are testing for uniformity, the Kuiper test returns the
> same result for X and for (X+0.3)%1. Since for pulsars there is often
> no natural start time, this is a sensible thing to ask from any test
> for uniformity.
>
>>> I also have some code for the H test (essentially looking at the
>>> Fourier coefficients to find how many harmonics are worth including
>>> and what the significance is; de Jager et al. 1989 "A powerful test for
>>> weak periodic signals with unknown light curve shape in sparse data").
>>> But circular statistics seem to be a bit obscure, so I'm not sure how
>>> much effort should go into putting this in scipy.
>>
>> For sure they are obscure to me, but there are a few circular descriptive
>> statistics in stats.morestats, and I saw a matlab toolbox on the
>> file exchange. I figured out by now that there are some pretty different
>> statistics used in various fields.
>> I guess, it's all up to you.
>
> To be honest, these are a little obscure even among pulsar
> astronomers; here as elsewhere histograms seem to dominate.
>
>> From your description (below), I would think, that for circular
>> distributions, we would need different generic functions that
>> don't fit in the current distribution classes, integration on a
>> circle (?) instead of integration on the real line.
>
> Basically, yes. Of course you can view integration on a circle as
> integration on a line parameterizing the circle, but there's no way
> around the fact that the first moment is a complex number which
> contains both position (argument) and concentration (magnitude)
> information.
>
>> If I define boundaries, vonmises.a and vonmises.b, then
>> I think, you would not be able to calculate pdf(x), cdf(x)
>> and so on for x outside of the support [a,b]. I don't know
>> whether it is possible to define a,b but not enforce them in
>> _argcheck and only use them as integration bounds.
>
> It's useful to be able to work with pdf and cdf outside the bounds,
> particularly since the effect of loc is to shift the bounds - so any
> code that takes loc as a free parameter would otherwise have to work
> hard to avoid going outside the bounds.
>
>> I checked briefly, vonmises.moment and vonmises.stats
>> work (integrating over ppf not pdf)
>> generic moment calculation with pdf (_mom0_sc) and
>> entropy fail. fit seems to work with the normal random
>> variable, but I thought I got lots of "weird" fit results
>> before.
>>
>> For the real line, this looks "strange"
>>>>> stats.vonmises.pdf(np.linspace(0,3*np.pi,10), 2)
>> array([ 0.51588541,  0.18978364,  0.02568442,  0.00944877,  0.02568442,
>>         0.18978364,  0.51588541,  0.18978364,  0.02568442,  0.00944877])
>>>>> stats.vonmises.cdf(np.linspace(0,3*np.pi,10), 2)
>> array([  0.5       ,   0.89890776,   0.98532805,   1.        ,
>>          7.28318531,   7.28318531,   7.28318531,   7.28318531,
>>          7.28318531,  13.56637061])
>
> Unfortunately it seems that my attempt to fix the von Mises
> distribution (using vonmises_cython) went badly awry, and now it
> reports nonsense for values outside -pi..pi.
> I could have sworn I had tests for that. I intend to fix this during
> the weekend's code sprint. "Correct" behaviour (IMHO) would make the
> CDF the antiderivative of the PDF, even though this means it leaves
> [0,1].
>
> Incidentally, how would I get nosetests to run only some of the
> distributions tests (ideally just the ones I choose)? When I just do
> "nosetests scipy.stats" it takes *forever*.
>
>> ppf, calculated generically, works and looks only a little bit strange.
>>
>>>>> stats.vonmises.ppf([0, 1e-10,1-1e-10,1],2)
>> array([       -Inf, -3.14159264,  3.14159264,         Inf])
>>
>>>
>>> It's worth defining the boundaries, but I don't think you'll get
>>> useful moment calculations out of it, since circular moments are
>>> defined differently from linear moments: rather than int(x**n*pdf(x))
>>> they're int(exp(2j*pi*n*x)*pdf(x)). There are explicit formulas for
>>> these for the von Mises distribution, though, and they're useful
>>> because they are essentially the Fourier coefficients of the pdf.
>>>
>>> For context, I've done some work with representing circular
>>> probability distributions (= pulse profiles) in terms of their moments
>>> (= Fourier series), and in particular using a kernel density estimator
>>> with either a sinc kernel (= truncation) or a von Mises kernel (=
>>> scaling coefficients by von Mises moments). The purpose was to get
>>> not-too-biased estimates of the modulation amplitudes of X-ray
>>> pulsations; the paper has been kind of sidetracked by other work.
>>
>> Sounds like statistics for seasonal time series to me, except you
>> might have a lot more regularity in the repeated pattern than in
>> economics or climate research.
>
> Well, yes and no. Many pulsars have a stable average pulse profile,
> which is what we usually care about, especially where we're in the
> regime of one photon every few tens of thousands of turns. On the
> other hand, when dealing with orbital variability, I ran into a
> statistical question I wasn't sure how to pose: if I add all the
> (partial) orbits of data I have together, I get a distribution that is
> pretty clearly nonconstant. But is that variability actually repeating
> from orbit to orbit, or is it really just random variability? In a
> seasonal context it sounds less exotic: if I combine data from the
> last three years, I find a statistically significant excess of rain in
> the fall. But is this just the effect of one very rainy season, that
> happened to be in the fall one year, or is it that each fall there is
> an excess? The individual years, unfortunately, don't seem to have
> enough events to detect significant non-uniformity.
>
> Anne

From nbwood at lamar.colostate.edu Wed Oct 28 15:07:01 2009
From: nbwood at lamar.colostate.edu (Norm Wood)
Date: Wed, 28 Oct 2009 13:07:01 -0600
Subject: [SciPy-User] scipy.linalg.det TypeError
Message-ID: <20091028190701.GA30122@wombat.atmos.colostate.edu>

Hi,

I'm trying to use the scipy code for kernel density estimation and am
running into a problem with the calculation of a determinant. In
scipy/linalg/basic.py, the determinant code has a couple of lines:

    fdet, = get_flinalg_funcs(('det',),(a1,))
    a_det,info = fdet(a1,overwrite_a=overwrite_a)

These are at lines 484 and 485 in the scipy 0.7.1 basic.py source file.
Execution of the second line produces a TypeError:

TypeError: 'NoneType' object is not callable

I expect this means that the first line failed to "get" the linalg
function called 'det'. I'd appreciate any suggestions about why this is
happening. The odd thing is that the numpy routine for calculating a
determinant works fine. Here's simple code that shows the problem:

#!/usr/bin/env /usr/bin/python

import numpy
import scipy.linalg
import numpy.linalg

A = numpy.matrix([[1.1, 1.9],[1.9,3.5]])
# This works fine
y = numpy.linalg.det(A)
# This throws a TypeError
y = scipy.linalg.det(A)

I'm using LAPACK 3.1.1, Atlas 3.8.0, numpy 1.2.1, and I've tried both
scipy 0.7.0 and 0.7.1, all built with gcc 3.3.6 & g77.

Platform is Linux (Slackware 10.2), kernel 2.4.31.
Python is version 2.4.1, built with gcc 3.3.5

Here's the Atlas build info:

ATLAS version 3.8.0 built by norm on Thu Dec 6 17:18:59 MST 2007:
   UNAME    : Linux wombat 2.4.31-1 #1 SMP Wed Dec 7 16:52:41 MST 2005 i686 unknown unknown GNU/Linux
   INSTFLG  : -1 0 -a 1
   ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_P4E -DATL_CPUMHZ=3014 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632
   F2CDEFS  : -DAdd__ -DF77_INTEGER=int -DStringSunStyle
   CACHEEDGE: 524288
   F77      : g77, version GNU Fortran (GCC) 3.3.6
   F77FLAGS : -O -m32
   SMC      : gcc, version gcc (GCC) 3.3.6
   SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32
   SKC      : gcc, version gcc (GCC) 3.3.6
   SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32

The output of numpy/distutils/system_info.py is pretty long. If there's
particular information from that which would be useful, I'll be glad to
send it, but it looks to me as if numpy and scipy are using the same
atlas libraries. Here's the atlas_info from system_info.py:

atlas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /home/norm/lusr/lib
  __main__.atlas_info
  FOUND:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/home/norm/lusr/lib']
    language = f77
    include_dirs = ['/home/norm/lusr/include']

And here's the atlas_blas_info from the scipy build:

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/home/norm/lusr/lib']
    language = c
    include_dirs = ['/home/norm/lusr/include']

Thanks for any suggestions.

Norm

From josef.pktd at gmail.com Wed Oct 28 15:21:30 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 28 Oct 2009 15:21:30 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
In-Reply-To: 
References: 
Message-ID: <1cd32cbb0910281221l8e9838fwa54559d0a937dbbe@mail.gmail.com>

On Wed, Oct 28, 2009 at 2:59 PM, Anne Archibald wrote:
> I put the Kuiper test (and some other non-uniformity tests) up on github:
> http://github.com/aarchiba/kuiper/
> I'd like to clean up the tests somewhat (to reduce the code
> duplication between test_kuiper, test_htest, and test_zm2) before
> proposing any of it be included in scipy.

Also docstrings about the purpose of htest and zm2 would be
helpful. I have no idea.

zm2 or Zm2 ?

In most cases (I think), we switched to non-random random tests in
stats by fixing a seed. It won't test any new cases, but in last years
discussion that seemed to be the preferred way.

Josef

[clip]

From kmichael.aye at googlemail.com Wed Oct 28 18:46:53 2009
From: kmichael.aye at googlemail.com (Michael Aye)
Date: Wed, 28 Oct 2009 15:46:53 -0700 (PDT)
Subject: [SciPy-User] table2.fits anybody?
Message-ID: 

Hi!

I am working through the very nice interactive data analysis tutorial
from here:
http://www.scipy.org/wikis/topical_software/Tutorial
but all the data downloads (even the older versions for numarray) are
missing the table2.fits file.

Does anybody have one for me? And also interesting: am I the only one
who ever went through this tutorial, or why does nobody ever seem to
have asked for it? :)

Best regards,
Michael

From peridot.faceted at gmail.com Wed Oct 28 22:48:42 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 28 Oct 2009 22:48:42 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
In-Reply-To: <1cd32cbb0910281221l8e9838fwa54559d0a937dbbe@mail.gmail.com>
References: <1cd32cbb0910281221l8e9838fwa54559d0a937dbbe@mail.gmail.com>
Message-ID: 

2009/10/28 :
> Also docstrings about the purpose of htest and zm2 would be
> helpful. I have no idea.

Ahem. Yes, the code was pretty impenetrable without them. Sorry about
that! I lose my (few) good coding habits very easily when there's a
proposal due.

> zm2 or Zm2 ?

That's a good question. The test is usually written in LaTeX as
$Z_m^2$, so in a sense Zm2 would be more natural, but it's not pep8
(and I'm really used to caps indicating a class by now).
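For readers following the thread who have not met the statistic itself:
the Kuiper test is a cyclic variant of Kolmogorov-Smirnov. A minimal
sketch of the V statistic against a fully specified CDF; this is just
the textbook definition, not the implementation on github, and the
significance level additionally needs the asymptotic tail series,
omitted here:

import numpy as np

def kuiper_V(x, cdf):
    """Kuiper statistic V = D+ + D- of samples x against a callable cdf."""
    x = np.sort(x)
    n = len(x)
    f = cdf(x)
    d_plus = (np.arange(1.0, n + 1) / n - f).max()   # ECDF above the model
    d_minus = (f - np.arange(0.0, n) / n).max()      # model above the ECDF
    return d_plus + d_minus

# Testing phases on [0,1) for uniformity, where the shift invariance
# discussed above matters: V is unchanged if x is replaced by (x+0.3)%1.
# v = kuiper_V(phases, lambda x: x)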
> In most cases (I think), we switched to non-random random tests in
> stats by fixing a seed. It won't test any new cases, but in last years
> discussion that seemed to be the preferred way.

That's a good point too. Do you set the seed in a per-function setup,
or just once per module? The latter seems simpler and saner but in
principle it introduces interrelations between tests. How deterministic
is nosetests? (e.g. does it always run all the tests in a file in the
same order no matter how or when it runs them?) I guess running single
tests will give them different random numbers than running the module
as a whole... anyway, seeding once per module should be enough to avoid
end users being bitten by statistical failures.

Anne

[clip]

From josef.pktd at gmail.com Thu Oct 29 01:49:07 2009
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 29 Oct 2009 01:49:07 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
In-Reply-To: 
References: <1cd32cbb0910281221l8e9838fwa54559d0a937dbbe@mail.gmail.com>
Message-ID: <1cd32cbb0910282249i78195fc6xd34491ee145944b8@mail.gmail.com>

On Wed, Oct 28, 2009 at 10:48 PM, Anne Archibald wrote:
[clip]
>> zm2 or Zm2 ?
>
> That's a good question. The test is usually written in LaTeX as
> $Z_m^2$, so in a sense Zm2 would be more natural, but it's not pep8
> (and I'm really used to caps indicating a class by now).

I prefer zm2, although in numpy/scipy, pep8 capitalization is not really
well observed, given all the classes that pretend to be functions.

>> In most cases (I think), we switched to non-random random tests in
>> stats by fixing a seed.
[clip]
> anyway, seeding once per module should be enough to avoid
> end users being bitten by statistical failures.

I just checked: the original tests for random number generation still
don't use a random seed, but they are not very strict.
All new tests, for the other distributions methods, have a seed directly
before every call to random, so they are completely deterministic.
The tests for kstest have two simple examples verified against R and
some regression tests with seeded random numbers and also deterministic
outcome.
I haven't written any tests that are truly random for inclusion in a
testsuite in a while, but I use them during development because
they are easy to write.

nose sorts the tests, I think, alphabetically, so a seed in the module
would still be deterministic with the same random numbers, as long as
no new tests are added, and there is no selection of tests, e.g. run
only specific tests.

Josef

BTW: I looked at some graphs of pulse profiles and they look really
like seasonal time series to me. But I never thought of forecasting the
load (demand) on an electricity grid for the next day in half hour
intervals as based on astronomical phenomena. I didn't see whether
pulsars have special events like a soccer world cup in the middle
of the night.
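A minimal sketch of the seeding convention Josef describes (a seed
directly before every call to random, so the test is fully
deterministic); the distribution drawn from here is an arbitrary
stand-in, not an actual scipy.stats test:

import numpy as np

def test_rvs_deterministic():
    # Seed directly before the call to random: every run sees the
    # identical stream, so the check below can never fail "statistically".
    np.random.seed(1234)
    rvs1 = np.random.standard_normal(100)
    np.random.seed(1234)
    rvs2 = np.random.standard_normal(100)
    assert (rvs1 == rvs2).all()   # re-seeding reproduces the draw exactly
    # In a real regression test the second draw would instead be a value
    # frozen once (e.g. a sample mean checked against a stored constant).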
[clip]

From peridot.faceted at gmail.com Thu Oct 29 04:22:12 2009
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 29 Oct 2009 04:22:12 -0400
Subject: [SciPy-User] Kuiper test (was Re: scipy.stats.fit inquiry)
In-Reply-To: <1cd32cbb0910282249i78195fc6xd34491ee145944b8@mail.gmail.com>
References: <1cd32cbb0910281221l8e9838fwa54559d0a937dbbe@mail.gmail.com>
	<1cd32cbb0910282249i78195fc6xd34491ee145944b8@mail.gmail.com>
Message-ID: 

2009/10/29 :
> On Wed, Oct 28, 2009 at 10:48 PM, Anne Archibald wrote:
>> 2009/10/28 :
>>> zm2 or Zm2 ?
>>
>> That's a good question. The test is usually written in LaTeX as
>> $Z_m^2$, so in a sense Zm2 would be more natural, but it's not pep8
>> (and I'm really used to caps indicating a class by now).
>
> I prefer zm2, although in numpy/scipy, pep8 capitalization is not really
> well observed, given all the classes that pretend to be functions.

So do I; not sure why I had it the other way.

> I just checked: the original tests for random number generation still
> don't use a random seed, but they are not very strict.
> All new tests, for the other distributions methods, have a seed directly
> before every call to random, so they are completely deterministic.
[clip]

I wrote a "seed" decorator that makes per-test seeding easy. Just make
sure it goes *outside* the double_check decorator... I think the
pseudorandomness is a good way to test things like the Kuiper false
positive probability.

> BTW: I looked at some graphs of pulse profiles and they look really
> like seasonal time series to me. But I never thought of forecasting the
> load (demand) on an electricity grid for the next day in half hour
> intervals as based on astronomical phenomena. I didn't see whether
> pulsars have special events like a soccer world cup in the middle
> of the night.

Heh. In a way, yes. Those profiles you saw are probably *average* pulse
profiles, and in fact in those (few) cases where we have enough
signal-to-noise to look at individual pulses, they are often
distressingly variable. The most astonishing example is perhaps the
Crab pulsar, in which there are often "giant pulses", in which a region
less than a meter across blasts out a single pulse often bright enough
to see on your (analog) TV if you knew when to look. These pulses vary
in brightness by several orders of magnitude.

Even for average pulsars, there are many that undergo "outages"
anything from one pulse long to several weeks long; some pulsars flip
back and forth between two or more different "modes" with different
pulse profiles, and it looks like maybe most pulsars' average profiles
are actually just the envelope of shorter "drifting subpulses" that
drift systematically or randomly in phase. Most of this weirdness is
seen in radio, though, where the emission is a negligible fraction of a
pulsar's power budget (though the one that shuts off for weeks at a
time actually spins down faster when on, by something like 50%). In the
X-rays things are mostly a little better (though the anomalous X-ray
pulsars show millisecond-long bursts, gradual profile changes, and
long-term "flares", among other peculiar behaviour) and the gamma rays
look like an even better handle on the emission physics. Both X-rays
and gamma rays are photon-starved (Fermi gets a photon every few days
from a typical source, I think), so maybe we're just not seeing the
weirdness. Photon counting does at least make for a nice clean
statistical problem. Hence the various periodicity tests.

Anne
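Anne's "seed" decorator itself is not shown in the thread; a minimal
sketch of what a per-test seed decorator along those lines might look
like (the name and signature are guesses; functools.wraps is used so
nose still sees the wrapped function as the same test):

import functools
import numpy as np

def seed(n):
    """Run the decorated test with numpy's RNG seeded to n."""
    def decorator(f):
        @functools.wraps(f)              # keep the test name visible to nose
        def wrapper(*args, **kwargs):
            np.random.seed(n)            # per-test seeding, as described above
            return f(*args, **kwargs)
        return wrapper
    return decorator

# Usage sketch: each run of the test sees the same pseudorandom stream,
# so e.g. a Monte Carlo check of the Kuiper false positive rate is
# repeatable.
# @seed(42)
# def test_kuiper_false_positive_rate():
#     ...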
Anne From d.l.goldsmith at gmail.com Thu Oct 29 04:58:48 2009 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 29 Oct 2009 01:58:48 -0700 Subject: [SciPy-User] scipy.linalg.det TypeError In-Reply-To: <20091028190701.GA30122@wombat.atmos.colostate.edu> References: <20091028190701.GA30122@wombat.atmos.colostate.edu> Message-ID: <45d1ab480910290158u7d274687t737a27690fd08497@mail.gmail.com> Uncertain why you're having a problem - your sample code works for me: >>> import scipy.linalg >>> import numpy.linalg >>> A=np.matrix([[1.1, 1.9],[1.9,3.5]]) >>> y = numpy.linalg.det(A); y 0.23999999999999988 >>> y = scipy.linalg.det(A); y 0.23999999999999988 >>> scipy.__version__ '0.7.1' >>> np.__version__ '1.3.0rc2' Python 2.5 on Windoze Vista HPE DG On Wed, Oct 28, 2009 at 12:07 PM, Norm Wood wrote: > > fine. Here's simple code that shows the problem: > > #!/usr/bin/env /usr/bin/python > > import numpy > import scipy.linalg > import numpy.linalg > > A = numpy.matrix([[1.1, 1.9],[1.9,3.5]]) > #This works fine > y = numpy.linalg.det(A) > #This throws a TypeError > y = scipy.linalg.det(A) > > I'm using LAPACK 3.1.1, Atlas 3.8.0, numpy 1.2.1, and I've tried both > scipy 0.7.0 and 0.7.1, all built with gcc 3.3.6 & g77. > > Platform is Linux (Slackware 10.2), kernel 2.4.31. > Python is version 2.4.1, built with gcc 3.3.5 > > Here's the Atlas build info: > ATLAS version 3.8.0 built by norm on Thu Dec 6 17:18:59 MST 2007: > UNAME : Linux wombat 2.4.31-1 #1 SMP Wed Dec 7 16:52:41 MST 2005 i686 > unknown unknown GNU/Linux > INSTFLG : -1 0 -a 1 > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_P4E -DATL_CPUMHZ=3014 -DATL_SSE3 > -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632 > F2CDEFS : -DAdd__ -DF77_INTEGER=int -DStringSunStyle > CACHEEDGE: 524288 > F77 : g77, version GNU Fortran (GCC) 3.3.6 > F77FLAGS : -O -m32 > SMC : gcc, version gcc (GCC) 3.3.6 > SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32 > SKC : gcc, version gcc (GCC) 3.3.6 > SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32 > > > The output of numpy/distutils/system_info.py is pretty long. If there's > particular information from that which would be useful, I'll be glad to > send it, but it looks to me as if numpy and scipy are using the same > atlas libraries. Here's the atlas_info from system_info.py > > atlas_info: > libraries f77blas,cblas,atlas not found in /usr/local/lib > libraries lapack_atlas not found in /usr/local/lib > libraries lapack_atlas not found in /home/norm/lusr/lib > __main__.atlas_info > FOUND: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/home/norm/lusr/lib'] > language = f77 > include_dirs = ['/home/norm/lusr/include'] > > > And here's the atlas_blas_info from the scipy build: > > atlas_blas_info: > libraries f77blas,cblas,atlas not found in /usr/local/lib > FOUND: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/home/norm/lusr/lib'] > language = c > include_dirs = ['/home/norm/lusr/include'] > > > Thanks for any suggestions. > > > Norm > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perry at stsci.edu Thu Oct 29 07:38:56 2009 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 29 Oct 2009 07:38:56 -0400 Subject: [SciPy-User] table2.fits anybody? 
In-Reply-To: References: Message-ID: I can't recall anyone ever pointing that out before (lots of people do go through the tutorial; I imagine that most just didn't bother to deal with the omission). I will email it to you. Thanks for letting us know. We'll update the downloads as well. Perry On Oct 28, 2009, at 6:46 PM, Michael Aye wrote: > Hi! > I am working through the very nice interactive data analysis tutorial > from here: > http://www.scipy.org/wikis/topical_software/Tutorial > but all the data downloads (even the older versions for numarray) are > missing the table2.fits file. > > Does anybody have one for me? And also interesting: Am I the only one > who ever did go through this tutorial or why nobody ever seemed to > have asked for it? :) > > Best regards, > Michael > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josephsmidt at gmail.com Thu Oct 29 09:58:27 2009 From: josephsmidt at gmail.com (Joseph Smidt) Date: Thu, 29 Oct 2009 06:58:27 -0700 Subject: [SciPy-User] How Can I Bin A Matrix? Message-ID: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> Hello, Let's pretend I have some random 100x100 matrix and I wanted to form a 10x10 matrix where each element of the 10x10 matrix is the average of the corresponding 10x10 block of the 100x100 matrix. To make this clearer, let's suppose I have a 4x4 matrix: ( 7, 2, 3, 4 ) ( 9, 4, 5, 6 ) ( 3, 5, 7, 9 ) ( 1, 5, 2, 6 ) and let's say I want to bin it to a 2x2 matrix meaning I want to create a 2x2 matrix which would be: ( 5.5, 4.5 ) ( 3.5, 6.0 ) where 5.5 is the average of the upper left block of the 4x4 matrix: ( 7, 2 ) ( 9, 4 ) and similarly with the other elements. Anyways, given an arbitrary NxN matrix is there an easy way to bin it to an MxM matrix where N is divisible by M? If someone could come up with code to do this I would be very grateful. Joseph Smidt -- ------------------------------------------------------------------------ Joseph Smidt Physics and Astronomy 4129 Frederick Reines Hall Irvine, CA 92697-4575 Office: 949-824-3269 From cournape at gmail.com Thu Oct 29 10:09:08 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 29 Oct 2009 23:09:08 +0900 Subject: [SciPy-User] How Can I Bin A Matrix? In-Reply-To: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> References: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> Message-ID: <5b8d13220910290709n6b7386f5w6d440218eca2ce8c@mail.gmail.com> On Thu, Oct 29, 2009 at 10:58 PM, Joseph Smidt wrote: > Hello, > > Let's pretend I have some random 100x100 matrix and I wanted to > form a 10x10 matrix where each element of the 10x10 matrix is the > average of the corresponding 10x10 block of the 100x100 matrix. you could simply average the shifted submatrices, one for each position inside a block. To take your example with 2x2 submatrices: 0.25 * (x[::2,::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2,1::2]) David From tim.scipy at tropic.org.uk Thu Oct 29 10:47:58 2009 From: tim.scipy at tropic.org.uk (Tim Goodsall) Date: Thu, 29 Oct 2009 14:47:58 +0000 Subject: [SciPy-User] How Can I Bin A Matrix? In-Reply-To: <20091029144546.GA16349@tropic.org.uk> References: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> <20091029144546.GA16349@tropic.org.uk> Message-ID: <20091029144758.GB16349@tropic.org.uk> yo, this works I think. Not sure how fast it is on anything big. and it lops off some data if your binfactor is not a factor of the image dimensions.
this might be a problem for you... asymmetric binning is more fun.

import scipy as S

def spatiallybin(imap, binfactor):
    lx, ly = imap.shape

    # arbitrarily clip off some data if the dimensions are not a multiple of
    # the binning factor.
    imap = imap[0:lx - (lx % binfactor)]
    imap = imap[:, 0:ly - ly % binfactor]

    lx, ly = imap.shape

    x, y = S.mgrid[0:lx:binfactor, 0:ly:binfactor]

    # doubt this is the fastest way to do it for massive arrays.
    # taking the mean over the binfactor**2 shifted copies gives the
    # block averages that were asked for.
    new = S.array([imap[x + i, y + j] for i in range(binfactor)
                   for j in range(binfactor)]).mean(0)

    return new

On Thu, 2009-10-29 06:58, Joseph Smidt wrote: > Hello, > > Let's pretend I have some random 100x100 matrix and I wanted to > form a 10x10 matrix where each element of the 10x10 matrix is the > average of the corresponding 10x10 block of the 100x100 matrix. > > To make this clearer, let's suppose I have a 4x4 matrix: > > ( 7, 2, 3, 4 ) > ( 9, 4, 5, 6 ) > ( 3, 5, 7, 9 ) > ( 1, 5, 2, 6 ) > > and let's say I want to bin it to a 2x2 matrix meaning I want to > create a 2x2 matrix which would be: > > ( 5.5, 4.5 ) > ( 3.5, 6.0 ) > > where 5.5 is the average of the upper left block of the 4x4 matrix: > > ( 7, 2 ) > ( 9, 4 ) > > and similarly with the other elements. > > Anyways, given an arbitrary NxN matrix is there an easy way to bin it > to an MxM matrix where N is divisible by M? If someone could come up > with code to do this I would be very grateful. > > Joseph Smidt > > -- > ------------------------------------------------------------------------ > Joseph Smidt > > Physics and Astronomy > 4129 Frederick Reines Hall > Irvine, CA 92697-4575 > Office: 949-824-3269 > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Thu Oct 29 10:55:06 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 29 Oct 2009 10:55:06 -0400 Subject: [SciPy-User] How Can I Bin A Matrix? In-Reply-To: <20091029144758.GB16349@tropic.org.uk> References: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> <20091029144546.GA16349@tropic.org.uk> <20091029144758.GB16349@tropic.org.uk> Message-ID: <1cd32cbb0910290755u5274f072u737d8955055cc4e2@mail.gmail.com> On Thu, Oct 29, 2009 at 10:47 AM, Tim Goodsall wrote: > yo, > > this works I think. Not sure how fast it is on anything big. and it > lops off some data if your binfactor is not a factor of the image > dimensions. this might be a problem for you... asymmetric binning > is more fun. >
> import scipy as S
>
> def spatiallybin(imap, binfactor):
>
>     lx, ly = imap.shape
>
>     # arbitrarily clip off some data if the dimensions are not a multiple of
>     # the binning factor.
>     imap = imap[0:lx - (lx % binfactor)]
>     imap = imap[:, 0:ly - ly % binfactor]
>
>     lx, ly = imap.shape
>
>     x, y = S.mgrid[0:lx:binfactor, 0:ly:binfactor]
>
>     # doubt this is the fastest way to do it for massive arrays.
>     new = S.array([imap[x + i, y + j] for i in range(binfactor)
>                    for j in range(binfactor)]).mean(0)
>
>     return new
>
> On Thu, 2009-10-29 06:58, Joseph Smidt wrote: > > Hello, > > > > Let's pretend I have some random 100x100 matrix and I wanted to > > form a 10x10 matrix where each element of the 10x10 matrix is the > > average of the corresponding 10x10 block of the 100x100 matrix. > > > > 
To make this clearer, let's suppose I have a 4x4 matrix: > > > > ( 7, 2, 3, 4 ) > > ( 9, 4, 5, 6 ) > > ( 3, 5, 7, 9 ) > > ( 1, 5, 2, 6 ) > > > > and let's say I want to bin it to a 2x2 matrix meaning I want to > > create a 2x2 matrix which would be: > > > > ( 5.5, 4.5 ) > > ( 3.5, 6.0 ) > > > > where 5.5 is the average of the upper left block of the 4x4 matrix: > > > > ( 7, 2 ) > > ( 9, 4 ) > > > > and similarly with the other elements. > > > > Anyways, given an arbitrary NxN matrix is there an easy way to bin it > > to an MxM matrix where N is divisible by M? If someone could come up > > with code to do this I would be very grateful. > > > > Joseph Smidt > > I would try a full convolution with an average filter (scipy.signal or scipy.ndimage ?) and then just select the right elements in the convolved array. Not tried. Josef > > -- > > ------------------------------------------------------------------------ > > Joseph Smidt > > > > Physics and Astronomy > > 4129 Frederick Reines Hall > > Irvine, CA 92697-4575 > > Office: 949-824-3269 > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From fredmfp at gmail.com Thu Oct 29 11:09:49 2009 From: fredmfp at gmail.com (fred) Date: Thu, 29 Oct 2009 16:09:49 +0100 Subject: [SciPy-User] computing local extrema In-Reply-To: <666813.16131.qm@web33003.mail.mud.yahoo.com> References: <4A951D60.7010104@gmail.com> <469625.15621.qm@web33004.mail.mud.yahoo.com> <666813.16131.qm@web33003.mail.mud.yahoo.com> Message-ID: <4AE9B03D.7070909@gmail.com> I'm back to this thread. Just to say I thank you all. I took Zachary's approach, which fits my needs best. Cheers, -- Fred From sebastian at sipsolutions.net Thu Oct 29 12:11:15 2009 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 29 Oct 2009 17:11:15 +0100 Subject: [SciPy-User] How Can I Bin A Matrix? In-Reply-To: <5b8d13220910290709n6b7386f5w6d440218eca2ce8c@mail.gmail.com> References: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> <5b8d13220910290709n6b7386f5w6d440218eca2ce8c@mail.gmail.com> Message-ID: <1256832675.4598.3.camel@sebastian> Hi, I didn't test it much, so I hope it's not completely wrong, but I think such a thing should do it too with some array reshaping magic, only tried with some simple square arrays though:

import numpy as np

def bin(a, n):
    if np.any(np.array(a.shape) % n != 0):
        raise ValueError('Clipping not supported')
    if a.ndim != 2:
        raise ValueError('Only 2D supported here')

    avg = a.reshape(-1, n).mean(1)

    avg = avg.reshape(-1, n, a.shape[0]/n).mean(1)

    return avg

Regards, Sebastian On Thu, 2009-10-29 at 23:09 +0900, David Cournapeau wrote: > On Thu, Oct 29, 2009 at 10:58 PM, Joseph Smidt wrote: > > Hello, > > > > Let's pretend I have some random 100x100 matrix and I wanted to > > form a 10x10 matrix where each element of the 10x10 matrix is the > > average of the corresponding 10x10 block of the 100x100 matrix. > > you could simply average the shifted submatrices, one for each position inside a block.
To take > your example with 2x2 submatrices: > > 0.25 * (x[::2,::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2,1::2]) > > David > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Thu Oct 29 12:30:59 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 29 Oct 2009 12:30:59 -0400 Subject: [SciPy-User] How Can I Bin A Matrix? In-Reply-To: <1256832675.4598.3.camel@sebastian> References: <142682e10910290658n35443430v40925d7ec651706d@mail.gmail.com> <5b8d13220910290709n6b7386f5w6d440218eca2ce8c@mail.gmail.com> <1256832675.4598.3.camel@sebastian> Message-ID: <1cd32cbb0910290930m45da6e80k639a44f877ae91fe@mail.gmail.com> On Thu, Oct 29, 2009 at 12:11 PM, Sebastian Berg wrote: > Hi, > > I didn't test it much, so I hope it's not completely wrong, but I think > such a thing should do it too with some array reshaping magic, only > tried with some simple square arrays though: >
> import numpy as np
>
> def bin(a, n):
>     if np.any(np.array(a.shape) % n != 0):
>         raise ValueError('Clipping not supported')
>     if a.ndim != 2:
>         raise ValueError('Only 2D supported here')
>
>     avg = a.reshape(-1, n).mean(1)
>
>     avg = avg.reshape(-1, n, a.shape[0]/n).mean(1)
>
>     return avg
>
> Regards, > > Sebastian > > On Thu, 2009-10-29 at 23:09 +0900, David Cournapeau wrote: >> On Thu, Oct 29, 2009 at 10:58 PM, Joseph Smidt wrote: >> > Hello, >> > >> > Let's pretend I have some random 100x100 matrix and I wanted to >> > form a 10x10 matrix where each element of the 10x10 matrix is the >> > average of the corresponding 10x10 block of the 100x100 matrix. >> >> you could simply average the shifted submatrices, one for each position inside a block. To take >> your example with 2x2 submatrices: >> >> 0.25 * (x[::2,::2] + x[1::2, ::2] + x[::2, 1::2] + x[1::2,1::2]) >> >> David >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > maybe with some linear algebra: >>> a = np.array([( 7, 2, 3, 4 ), ... ( 9, 4, 5, 6 ), ... ( 3, 5, 7, 9 ), ... ( 1, 5, 2, 6 )]) >>> b = np.kron(np.eye(2),np.ones(2)) >>> b array([[ 1., 1., 0., 0.], [ 0., 0., 1., 1.]]) >>> np.dot(b,a) array([[ 16., 6., 8., 10.], [ 4., 10., 9., 15.]]) >>> np.dot(np.dot(b,a),b.T) array([[ 22., 18.], [ 14., 24.]]) >>> >>> np.dot(np.dot(b,a),b.T)/(2.**2) array([[ 5.5, 4.5], [ 3.5, 6. ]]) Josef From kmichael.aye at googlemail.com Thu Oct 29 14:20:29 2009 From: kmichael.aye at googlemail.com (Michael Aye) Date: Thu, 29 Oct 2009 11:20:29 -0700 (PDT) Subject: [SciPy-User] table2.fits anybody? In-Reply-To: References: Message-ID: <69205b8e-d119-4a5d-ad37-e8a60b9f7d2e@u36g2000prn.googlegroups.com> Have received it! Many thanks! Actually, while you are at it: The text of the tutorial is a bit funny at the point when dealing with 'table2.fits', it actually sounds a bit like a hidden joke, which is why I was almost not expecting to find a table2.fits. Here is the text of page 27 of pydatatut.pdf: >>> print tab2 [('M51', 13.5, 2) ('NGC4151', 5.8, 5) ('Crab Nebula', 11.12, 3)] >>> col3 = tab2.field('nobs') But the key 'nobs' does not exist in table2.fits! Honi soit qui mal y pense...
;) Best regards, Michael On Oct 29, 12:38 pm, Perry Greenfield wrote: > I can't recall anyone ever pointing that out before (lots of people do > go through the tutorial; I imagine that most just didn't bother to > deal with the omission). I will email it to you. Thanks for letting us > know. We'll update the downloads as well. > > Perry > > On Oct 28, 2009, at 6:46 PM, Michael Aye wrote: > > > > > > > Hi! > > I am working through the very nice interactive data analysis tutorial > > from here: > > http://www.scipy.org/wikis/topical_software/Tutorial > > but all the data downloads (even the older versions for numarray) are > > missing the table2.fits file. > > > Does anybody have one for me? And also interesting: Am I the only one > > who ever did go through this tutorial or why nobody ever seemed to > > have asked for it? :) > > > Best regards, > > Michael > > _______________________________________________ > > SciPy-User mailing list > > SciPy-U... at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From perry at stsci.edu Thu Oct 29 14:41:10 2009 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 29 Oct 2009 14:41:10 -0400 Subject: [SciPy-User] table2.fits anybody? In-Reply-To: <69205b8e-d119-4a5d-ad37-e8a60b9f7d2e@u36g2000prn.googlegroups.com> References: <69205b8e-d119-4a5d-ad37-e8a60b9f7d2e@u36g2000prn.googlegroups.com> Message-ID: <093D85BD-5E71-47EB-AAE7-43177B74DEAE@stsci.edu> On Oct 29, 2009, at 2:20 PM, Michael Aye wrote: > Have received it! Many thanks! > > Actually, while you are at it: > The text of the tutorial is a bit funny at the point when dealing with > 'table2.fits', it actually sounds a bit like a hidden joke, which is > why I was almost not expecting to find a table2.fits. > Here is the text of page 27 of pydatatut.pdf: > >>>> print tab2 > [('M51', 13.5, 2) ('NGC4151', 5.8, 5) ('Crab Nebula', 11.12, 3)] >>>> col3 = tab2.field('nobs') > > But the key 'nobs' does not exist in table2.fits! > > Honi soit qui mal y pense... ;) > > Best regards, > Michael Hmmm, that's odd. I see it. What do you see if you type: >>> tab2._coldefs From randy.groves at boeing.com Thu Oct 29 20:36:22 2009 From: randy.groves at boeing.com (Groves, Randy) Date: Thu, 29 Oct 2009 17:36:22 -0700 Subject: [SciPy-User] Strange installation? Message-ID: So - I'm a SciPy newbie, but not to Python. I've got a Fedora 10, with Python 2.5.2, scipy-0.7.0-2.fc10.i386 installed (scipy using yum). I try the following (never mind the values ...): Python 2.5.2 (r252:60911, Sep 30 2008, 15:41:38) [GCC 4.3.2 20080917 (Red Hat 4.3.2-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> xnew = [1,2,3] >>> ynew = [10, 20, 25] >>> p0 = 17.0 >>> fitfunc = lambda p, x: p - x >>> errfunc = lambda p, x, y: fitfunc(p, x) - y >>> p0 = 17.0 >>> p1, success = scipy.optimize.leastsq(errfunc, p0, args=(xnew, ynew)) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'optimize' >>> If I then run the unit tests with scipy.test(), afterwards the call to scipy.optimize.leastsq is just fine. dir(scipy) confirms that optimize is not available until something imports it (running the test suite evidently pulls it in). Just also determined that if I use: import scipy.optimize as op I can see the optimize package just fine. -randy 
From josef.pktd at gmail.com Thu Oct 29 21:27:02 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 29 Oct 2009 21:27:02 -0400 Subject: [SciPy-User] Strange installation? In-Reply-To: References: Message-ID: <1cd32cbb0910291827k5be944f6i5c9aa200dff5b328@mail.gmail.com> On Thu, Oct 29, 2009 at 8:36 PM, Groves, Randy wrote: > So - I'm a SciPy newbie, but not to Python. > > > > I've got a Fedora 10, with Python 2.5.2, scipy-0.7.0-2.fc10.i386 installed > (scipy using yum). > > > > I try the following (never mind the values ...): > > > > Python 2.5.2 (r252:60911, Sep 30 2008, 15:41:38) > > [GCC 4.3.2 20080917 (Red Hat 4.3.2-4)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > >>>> import scipy > >>>> xnew = [1,2,3] > >>>> ynew = [10, 20, 25] > >>>> p0 = 17.0 > >>>> fitfunc = lambda p, x: p - x > >>>> errfunc = lambda p, x, y: fitfunc(p, x) - y > >>>> p0 = 17.0 > >>>> p1, success = scipy.optimize.leastsq(errfunc, p0, args=(xnew, ynew)) > > Traceback (most recent call last): > > File "<stdin>", line 1, in <module> > > AttributeError: 'module' object has no attribute 'optimize' > >>>> > > > > If I then run the unit tests with scipy.test(), afterwards the call to > scipy.optimize.leastsq is just fine. > > > > dir(scipy) confirms that optimize is not available until something > imports it. > > > > Just also determined that if I use: > > > > import scipy.optimize as op > > > > I can see the optimize package just fine. That's the way to import it; I usually use from scipy import optimize from scipy import stats Because scipy is pretty large and it takes time to import all of it, subpackages are not automatically loaded. Users can just load the subpackages on demand. Josef > > > -randy > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From emmanuelle.gouillart at normalesup.org Fri Oct 30 03:46:48 2009 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Fri, 30 Oct 2009 08:46:48 +0100 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix Message-ID: <20091030074648.GA12676@phare.normalesup.org> Dear list, I'm looking for a memory-savvy algorithm in scipy.sparse.linalg, that I could use to solve a very large sparse linear system. The system can be written as AX = B, where A is a symmetric band matrix with non-zero elements on only 4 or 6 upper (and lower) diagonals. The shape of A is NxN where N is very large (up to 10e6, if possible...), the shape of B and X is NxM, where M is much smaller (up to 50, say). Unfortunately, A is symmetric but not positive-definite as the sum of each row and column is null (A is a Laplacian matrix). B is also sparse (it is actually a block of the original Laplacian matrix). What kind of solver implemented in scipy would you recommend, so that I can solve such a system with N as large as possible (on a bottom-end computer with only 2Gb RAM)? Up to now I have used scipy.sparse.linalg.spsolve, and the CSR scipy.sparse format for A and B: this works fine for N as large as 300, but the memory requirements are too high for greater values of N on my computer. Should I use an iterative solver (I'm a complete newbie in linear algebra)? Also, spsolve requires that the right hand side is a flat vector, so I have to solve as many systems as the number of columns in B, which must be highly inefficient... Any way I can solve the "whole" linear system? 
Any hints will be much appreciated! For those curious about the background, I'm trying to implement Grady's random walker algorithm [1] to segment large 3-D X-ray tomography images. N=l**3 is the number of voxels in the cubic image, and M is the number of regions which I would like to segment. I don't require a very good precision on the elements of the solution X. Thanks in advance, Emmanuelle [1] L. Grady, "Random walks for image segmentation", IEEE Trans. on pattern analysis and machine intelligence, Vol. 28, p. 1768, 2006. From josef.pktd at gmail.com Fri Oct 30 04:25:51 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 30 Oct 2009 04:25:51 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <20091030074648.GA12676@phare.normalesup.org> References: <20091030074648.GA12676@phare.normalesup.org> Message-ID: <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> On Fri, Oct 30, 2009 at 3:46 AM, Emmanuelle Gouillart wrote: > Dear list, > > I'm looking for a memory-savvy algorithm in scipy.sparse.linalg, that I > could use to solve a very large sparse linear system. The system can be > written as AX = B, where A is a symmetric band matrix with non-zero > elements on only 4 or 6 upper (and lower) diagonals. The shape of A is > NxN where N is very large (up to 10e6, if possible...), the shape of B > and X is NxM, where M is much smaller (up to 50, say). Unfortunately, A > is symmetric but not positive-definite as the sum of each row and column > is null (A is a Laplacian matrix). B is also sparse (it is actually a > block of the original Laplacian matrix). > > What kind of solver implemented in scipy would you recommend, so that I > can solve such a system with N as large as possible (on a bottom-end > computer with only 2Gb RAM)? Up to now I have used > scipy.sparse.linalg.spsolve, and the CSR scipy.sparse format for A and B: > this works fine for N as large as 300, but the memory requirements are > too high for greater values of N on my computer. Should I use an > iterative solver (I'm a complete newbie in linear algebra)? Also, spsolve > requires that the right hand side is a flat vector, so I have to solve as > many systems as the number of columns in B, which must be highly > inefficient... Any way I can solve the "whole" linear system? Any hints > will be much appreciated! I don't know about memory consumption but I think scipy.sparse.linalg.factorized(A) should be more efficient if you need to solve for several right hand sides. Josef > > For those curious about the background, I'm trying to implement Grady's > random walker algorithm [1] to segment large 3-D X-ray tomography images. > N=l**3 is the number of voxels in the cubic image, and M is the number of > regions which I would like to segment. I don't require a very good > precision on the elements of the solution X. > > Thanks in advance, > > Emmanuelle > > [1] L. Grady, "Random walks for image segmentation", IEEE Trans. on > pattern analysis and machine intelligence, Vol. 28, p. 1768, 2006. 
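PS: a minimal sketch of the factorized pattern with several right hand sides -- untested against the real problem, and the spdiags matrix below is only a small nonsingular stand-in with made-up sizes, not the actual Laplacian:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import factorized

N, M = 1000, 5                      # illustrative sizes only
ones = np.ones(N)
A = sparse.spdiags([ones, -2.0 * ones, ones], [-1, 0, 1], N, N).tocsc()
B = np.random.rand(N, M)

solve = factorized(A)               # factorize A once (wants CSC input)
# reuse the factorization for every column of B
X = np.column_stack([solve(B[:, j]) for j in range(M)])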
> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From dwf at cs.toronto.edu Fri Oct 30 04:43:23 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 30 Oct 2009 04:43:23 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> References: <20091030074648.GA12676@phare.normalesup.org> <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> Message-ID: On 30-Oct-09, at 4:25 AM, josef.pktd at gmail.com wrote: > I don't know about memory consumption but I think > scipy.sparse.linalg.factorized(A) > should be more efficient if you need to solve for several right hand > sides. You may be right but the factorized matrices may be an awful lot denser which could impact her memory requirements. You may have some luck with lgmres and/or bicg; as far as I know neither imposes conditions on positive definiteness. You might also look at http://code.google.com/p/pyamg/ - they have a lot of other methods implemented, possibly some specially optimized for symmetric banded matrices. David From dwf at cs.toronto.edu Fri Oct 30 05:28:13 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 30 Oct 2009 05:28:13 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <20091030074648.GA12676@phare.normalesup.org> References: <20091030074648.GA12676@phare.normalesup.org> Message-ID: <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> On 30-Oct-09, at 3:46 AM, Emmanuelle Gouillart wrote: > I'm looking for a memory-savvy algorithm in scipy.sparse.linalg, > that I > could use to solve a very large sparse linear system. The system can > be > written as AX = B, where A is a symmetric band matrix with non-zero > elements on only 4 or 6 upper (and lower) diagonals. The shape of A is > NxN where N is very large (up to 10e6, if possible...), the shape of B > and X is NxM, where M is much smaller (up to 50, say). > Unfortunately, A > is symmetric but not positive-definite as the sum of each row and > column > is null (A is a Laplacian matrix). B is also sparse (it is actually a > block of the original Laplacian matrix). I'm not sure they exist in scipy.sparse.linalg, but looking into your problem a little further it appears that several options exist in LAPACK (but are not wrapped in SciPy, yet). Conveniently, they have built-in support for multiple right-hand-sides. Solve a general banded system of linear equations: http://www.netlib.org/lapack/double/dgbsv.f Convert a symmetric banded matrix to symmetric tridiagonal form: http://www.netlib.org/lapack/double/dsbtrd.f The tridiagonal form is useful because you can then use this specialized routine for solving symmetric tridiagonal systems: http://www.netlib.org/lapack/double/dptsv.f So, if both memory and performance are at issue, I'd recommend looking into wrapping dsbtrd and dptsv with f2py (these are for doubles, hence the 'd' - if you're interested in single precision use the 's' versions). Note that they require you pass in the band matrix in a particular compressed format, which is kind of a pain, but on the other hand _will_ save you space over CSR or anything like that.
The layout you need to use for symmetric band matrices is near the bottom of this page: http://www.netlib.org/lapack/lug/node124.html Hope this helps, David From gael.varoquaux at normalesup.org Fri Oct 30 05:46:35 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 30 Oct 2009 10:46:35 +0100 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> References: <20091030074648.GA12676@phare.normalesup.org> <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> Message-ID: <20091030094635.GB16315@phare.normalesup.org> On Fri, Oct 30, 2009 at 05:28:13AM -0400, David Warde-Farley wrote: > I'm not sure they exist in scipy.sparse.linalg, but looking into your > problem a little further it appears that several options exist in > LAPACK (but are not wrapped in SciPy, yet). Conveniently, they have > built-in support for multiple right-hand-sides. Ha, I didn't know that LAPACK had support for sparse representations of multi-diagonal matrices. Cool! > Solve a general banded system of linear equations: > http://www.netlib.org/lapack/double/dgbsv.f > Convert a symmetric banded matrix to symmetric tridiagonal form: > http://www.netlib.org/lapack/double/dsbtrd.f > The tridiagonal form is useful because you can then use this > specialized routine for solving symmetric tridiagonal systems: > http://www.netlib.org/lapack/double/dptsv.f > So, if both memory and performance are at issue, I'd recommend looking > into wrapping dsbtrd and dptsv with f2py (these are for doubles, hence > the 'd' - if you're interested in single precision use the 's' > versions). Note that they require you pass in the band matrix in a > particular compressed format, which is kind of a pain, but on the other > hand _will_ save you space over CSR or anything like that. The layout > you need to use for symmetric band matrices is near the bottom of this > page: > http://www.netlib.org/lapack/lug/node124.html Any chance there might be something useful in pysparse? That would remove the wrapping work. Gaël From pav+sp at iki.fi Fri Oct 30 05:50:59 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Fri, 30 Oct 2009 09:50:59 +0000 (UTC) Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix References: <20091030074648.GA12676@phare.normalesup.org> <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> Message-ID: Fri, 30 Oct 2009 04:43:23 -0400, David Warde-Farley wrote: > On 30-Oct-09, at 4:25 AM, josef.pktd at gmail.com wrote: >> I don't know about memory consumption but I think >> scipy.sparse.linalg.factorized(A) >> should be more efficient if you need to solve for several right hand >> sides. > > You may be right but the factorized matrices may be an awful lot denser > which could impact her memory requirements. > > You may have some luck with lgmres and/or bicg; as far as I know neither > imposes conditions on positive definiteness. I remember there's also a block-version of LGMRES that's expected to be somehow more efficient with multiple right-hand sides. The implementation in scipy is for the plain version, however, and IIRC handles only one rhs at a time. I guess you can already do something similar:

import numpy as np
import scipy.sparse.linalg

outer_v = []
r = np.zeros_like(B)
for j, b in enumerate(B.T):
    r[:, j], info = scipy.sparse.linalg.lgmres(A, b, tol=1e-6, inner_m=10,
                                               outer_k=6, outer_v=outer_v)

I.e. store part of the old solution subspace, to accelerate solution of the next linear systems.
This might (I'm not really sure) help if the right-hand side vectors are in some sense "similar" to each other. In addition to scipy.sparse.linalg.lgmres, you can also try plain GMRES (scipy.sparse.linalg.gmres) -- if it converges, it might be faster, since the linear algebra part of current LGMRES implementation in Scipy is not fully optimized and may become slow if you choose to use huge inner subspaces. -- Pauli Virtanen From pav+sp at iki.fi Fri Oct 30 05:56:49 2009 From: pav+sp at iki.fi (Pauli Virtanen) Date: Fri, 30 Oct 2009 09:56:49 +0000 (UTC) Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix References: <20091030074648.GA12676@phare.normalesup.org> <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> Message-ID: Fri, 30 Oct 2009 05:28:13 -0400, David Warde-Farley wrote: [clip] > I'm not sure they exist in scipy.sparse.linalg, but looking into your > problem a little further it appears that several options exist in LAPACK > (but are not wrapped in SciPy, yet). Conveniently, they have built-in > support for multiple right-hand-sides. > > Solve a general banded system of linear equations: > http://www.netlib.org/lapack/double/dgbsv.f This is available as scipy.linalg.solve_banded, but I guess the rest of the gang is missing. (They'd be great to have *wink wink nudge nudge* :) -- Pauli Virtanen From dwf at cs.toronto.edu Fri Oct 30 06:38:26 2009 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 30 Oct 2009 06:38:26 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: References: <20091030074648.GA12676@phare.normalesup.org> <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> Message-ID: <5393FBC6-E8A7-49E9-9B5B-37047540E94B@cs.toronto.edu> On 30-Oct-09, at 5:56 AM, Pauli Virtanen wrote: > Fri, 30 Oct 2009 05:28:13 -0400, David Warde-Farley wrote: > [clip] >> I'm not sure they exist in scipy.sparse.linalg, but looking into your >> problem a little further it appears that several options exist in >> LAPACK >> (but are not wrapped in SciPy, yet). Conveniently, they have built-in >> support for multiple right-hand-sides. >> >> Solve a general banded system of linear equations: >> http://www.netlib.org/lapack/double/dgbsv.f > > This is available as scipy.linalg.solve_banded, but I guess the rest > of > the gang is missing. (They'd be great to have *wink wink nudge > nudge* :) Oh, duh. I was looking in scipy.linalg.fblas when I should've been looking in scipy.linalg.flapack. David From kmichael.aye at googlemail.com Fri Oct 30 06:42:26 2009 From: kmichael.aye at googlemail.com (Michael Aye) Date: Fri, 30 Oct 2009 03:42:26 -0700 (PDT) Subject: [SciPy-User] table2.fits anybody? In-Reply-To: <093D85BD-5E71-47EB-AAE7-43177B74DEAE@stsci.edu> References: <69205b8e-d119-4a5d-ad37-e8a60b9f7d2e@u36g2000prn.googlegroups.com> <093D85BD-5E71-47EB-AAE7-43177B74DEAE@stsci.edu> Message-ID: On Oct 29, 7:41 pm, Perry Greenfield wrote: > On Oct 29, 2009, at 2:20 PM, Michael Aye wrote: > > > > > > > Have received it! Many thanks! > > > Actually, while you are at it: > > The text of the tutorial is a bit funny at the point when dealing with > > 'table2.fits', it actually sounds a bit like a hidden joke, which is > > why I was almost not expecting to find a table2.fits. > > Here is the text of page 27 of pydatatut.pdf: > > >>>> print tab2 > > [('M51', 13.5, 2) ('NGC4151', 5.8, 5) ('Crab Nebula', 11.12, 3)] > >>>> col3 = tab2.field('nobs') > > > But the key 'nobs' does not exist in table2.fits!
> > > Honi soit qui mal y pense... ;) > > > Best regards, > > Michael > > Hmmm, that's odd. I see it. What do you see if you type: > > >>> tab2._coldefs > Ah, sorry, tried it with tab instead of tab2. Ok, then there is only one thing left to correct in the pdf: You replaced the 3 in col3 with 99 but pasted the same print-out line of tab2 twice: [('M51', 13.5, 2) ('NGC4151', 5.8, 5) ('Crab Nebula', 11.12, 3)] So that was a bit confusing, because we should see the 99 at the end now. But going through the motions myself it works fine and I see the change. Thanks very much, this tutorial is really fun! Best regards, Michael From emmanuelle.gouillart at normalesup.org Fri Oct 30 11:36:22 2009 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Fri, 30 Oct 2009 16:36:22 +0100 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> References: <20091030074648.GA12676@phare.normalesup.org> <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> Message-ID: <20091030153622.GA7507@phare.normalesup.org> Hello, thank you very much for all your answers! Josef, the scipy.sparse.linalg.factorized(A) tip works like a charm for solving efficiently for all right hand sides -- and the memory requirement is comparable to scipy.sparse.linalg.spsolve, so I guess the factorized matrices are not very dense... I can solve my system with N=10^6 in a few tens of seconds, which is really cool!
> > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From josef.pktd at gmail.com Fri Oct 30 11:45:08 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 30 Oct 2009 11:45:08 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <20091030153622.GA7507@phare.normalesup.org> References: <20091030074648.GA12676@phare.normalesup.org> <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> <20091030153622.GA7507@phare.normalesup.org> Message-ID: <1cd32cbb0910300845se53cef4m685189e6e32bb762@mail.gmail.com> On Fri, Oct 30, 2009 at 11:36 AM, Emmanuelle Gouillart wrote: > > ? ? ? ?Hello, > > ? ? ? ?thank you very much for all your answers! > > ? ? ? ?Josef, the scipy.sparse.linalg.factorized(A) tip works like a > charm for solving efficiently for all right hand sides -- and the memory > requirement is comparable to scipy.sparse.linalg.spsolve, so I guess the > factorized matrices are not very dense... I can solve my system with > N=10^6 in a few tens of seconds, which is really cool! If this works, you could also try scipy.sparse.linalg.splu Based on the description, it works for square matrices, so maybe it is more optimized for your case than factorized. There is also umfpack, but since I don't have it, I don't know what's in there. Josef > > ? ? ? ?For more speed and memory improvements, it will take me some time > to benchmark all the proposed solutions, so I shall report later on the > solution that works best for me. Joachim Dahl mentioned off-list that the > cvxopt package wraps Lapack's gbsv routine for banded matrices, which > might also be an option (and it also wraps other functions mentioned by > David such as ptsv). I will first try scipy's solutions such as lgmres > and bicg. > > ? ? ? ?Thanks a lot for your help, (and more later) > > ? ? ? ?Emmanuelle > >> I don't know about memory consumption but I think >> scipy.sparse.linalg.factorized(A) >> should be more efficient if you need to solve for several right hand sides. > >> Josef > > >> > For those curious about the background, I'm trying to implement Grady's >> > random walker algorithm [1] to segment large 3-D X-ray tomography images. >> > N=l**3 is the number of voxels in the cubic image, and M is the number of >> > regions which I would like to segment. I don't require a very good >> > precision on the elements of the solution X. > >> > Thanks in advance, > >> > Emmanuelle > >> > [1] L. Grady, "Random walks for image segmentation", IEEE Trans. on >> > pattern analysis and machine intelligence, Vol. 28, p. 1768, 2006. 
> >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Fri Oct 30 11:45:08 2009 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 30 Oct 2009 11:45:08 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <20091030153622.GA7507@phare.normalesup.org> References: <20091030074648.GA12676@phare.normalesup.org> <1cd32cbb0910300125r6852dc39wee2b4e7c9ae909a@mail.gmail.com> <20091030153622.GA7507@phare.normalesup.org> Message-ID: <1cd32cbb0910300845se53cef4m685189e6e32bb762@mail.gmail.com> On Fri, Oct 30, 2009 at 11:36 AM, Emmanuelle Gouillart wrote: > > Hello, > > thank you very much for all your answers! > > Josef, the scipy.sparse.linalg.factorized(A) tip works like a > charm for solving efficiently for all right hand sides -- and the memory > requirement is comparable to scipy.sparse.linalg.spsolve, so I guess the > factorized matrices are not very dense... I can solve my system with > N=10^6 in a few tens of seconds, which is really cool! If this works, you could also try scipy.sparse.linalg.splu. Based on the description, it works for square matrices, so maybe it is more optimized for your case than factorized. There is also umfpack, but since I don't have it, I don't know what's in there. Josef > > For more speed and memory improvements, it will take me some time > to benchmark all the proposed solutions, so I shall report later on the > solution that works best for me. Joachim Dahl mentioned off-list that the > cvxopt package wraps Lapack's gbsv routine for banded matrices, which > might also be an option (and it also wraps other functions mentioned by > David such as ptsv). I will first try scipy's solutions such as lgmres > and bicg. > > Thanks a lot for your help, (and more later) > > Emmanuelle > >> I don't know about memory consumption but I think >> scipy.sparse.linalg.factorized(A) >> should be more efficient if you need to solve for several right hand sides. > >> Josef > > >> > For those curious about the background, I'm trying to implement Grady's >> > random walker algorithm [1] to segment large 3-D X-ray tomography images. >> > N=l**3 is the number of voxels in the cubic image, and M is the number of >> > regions which I would like to segment. I don't require a very good >> > precision on the elements of the solution X. > >> > Thanks in advance, > >> > Emmanuelle > >> > [1] L. Grady, "Random walks for image segmentation", IEEE Trans. on >> > pattern analysis and machine intelligence, Vol. 28, p. 1768, 2006.
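PS: the splu variant follows the same pattern as factorized; a short untested sketch with a small stand-in matrix (again not the real Laplacian):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

N = 100
ones = np.ones(N)
A = sparse.spdiags([ones, -2.0 * ones, ones], [-1, 0, 1], N, N).tocsc()
lu = splu(A)               # LU factors of the square sparse matrix
x = lu.solve(np.ones(N))   # reuse lu.solve for each right hand side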
I don't think PySparse has any facilities for multiple RHS. Don't know about PyTrilinos. From gael.varoquaux at normalesup.org Fri Oct 30 14:35:25 2009 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 30 Oct 2009 19:35:25 +0100 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: References: <20091030074648.GA12676@phare.normalesup.org> <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> <20091030094635.GB16315@phare.normalesup.org> Message-ID: <20091030183525.GH16315@phare.normalesup.org> On Fri, Oct 30, 2009 at 02:13:03PM -0400, Jonathan Guyer wrote: > On Oct 30, 2009, at 5:46 AM, Gael Varoquaux wrote: > > Any chance there might be something useful in pysparse? That would > > remove > > the wrapping work. > I don't think PySparse has any facilities for multiple RHS. Don't know > about PyTrilinos. Hi Jon, The problem Emmanuelle has here really is solving an anistropic diffusion equation on a big grid. This can be seen as a PDE problem. Do you have any suggestions on good linear algebra or iterative options to do this with Python? We have not looked at FiPy so far, as it seems tedious to formulate the problem in PDE terms. Cheers, Ga?l From vanforeest at gmail.com Fri Oct 30 17:26:14 2009 From: vanforeest at gmail.com (nicky van foreest) Date: Fri, 30 Oct 2009 22:26:14 +0100 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <20091030074648.GA12676@phare.normalesup.org> References: <20091030074648.GA12676@phare.normalesup.org> Message-ID: Hi, Would pysparse be an option perhaps? I did some tests with scipy.sparse and pysparse, and the latter tends to build the matrices (much) faster (for my cases, that is). It has been some time ago that I installed pysparse, so I might be mistaken, but pysparse contains also umfpack and superlu, which are compiled in the process if they are not found on your system. bye Nicky 2009/10/30 Emmanuelle Gouillart : > Dear list, > > I'm looking for a memory-savvy algorithm in scipy.sparse.linalg, that I > could use to solve a very large sparse linear system. The system can be > written as AX = B, where A is a symmetric band matrix with non-zero > elements on only 4 or 6 upper (and lower) diagonals. The shape of A is > NxN where N is very large (up to 10e6, if possible...), the shape of B > and X is NxM, where M is much smaller (up to 50, say). Unfortunately, A > is symmetric but not positive-definite as the sum of each row and column > is null (A is a Laplacian matrix). B is also sparse (it is actually a > block of the original Laplacian matrix). > > What kind of solver implemented in scipy would you recommend, so that I > can solve such a system with N as large as possible (on a bottom-end > computer with only 2Gb RAM)? Up to now I have used > scipy.sparse.linalg.spsolve, and the CSR scipy.sparse format for A and B: > this works fine for N as large as 300, but the memory requirements are > too high for greater values of N on my computer. Should I use an > iterative solver (I'm a complete newbie in linear algebra)? Also, spsolve > requires that the right hand side is a flat vector, so I have to solve as > many systems as the number of columns in B, which must be highly > inefficient... Any way I can solve the "whole" linear system? Any hints > will be much appreciated! > > For those curious about the background, I'm trying to implement Grady's > random walker algorithm [1] to segment large 3-D X-ray tomography images. 
> N=l**3 is the number of voxels in the cubic image, and M is the number of > regions which I would like to segment. I don't require a very good > precision on the elements of the solution X. > > Thanks in advance, > > Emmanuelle > > [1] L. Grady, "Random walks for image segmentation", IEEE Trans. on > pattern analysis and machine intelligence, Vol. 28, p. 1768, 2006. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From guyer at nist.gov Fri Oct 30 17:35:50 2009 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 30 Oct 2009 17:35:50 -0400 Subject: [SciPy-User] solving large sparse linear system with Laplacian matrix In-Reply-To: <20091030183525.GH16315@phare.normalesup.org> References: <20091030074648.GA12676@phare.normalesup.org> <9DAB4765-98D8-4502-B223-32785E751C29@cs.toronto.edu> <20091030094635.GB16315@phare.normalesup.org> <20091030183525.GH16315@phare.normalesup.org> Message-ID: <26AD7B0C-E1A5-461C-9291-060801A5AEFF@nist.gov> On Oct 30, 2009, at 2:35 PM, Gael Varoquaux wrote: > The problem Emmanuelle has here really is solving an anistropic > diffusion > equation on a big grid. This can be seen as a PDE problem. Sure, that's what it sounded like. > Do you have > any suggestions on good linear algebra or iterative options to do this > with Python? The suggestions that have already been offered would be what I would try; either GMRES (although that's broken in PySparse) or PyAMG. Depending on the problem, PyTrilinos might offer some advantageous preconditioners, but at substantial up-front installation cost. > We have not looked at FiPy so far, as it seems tedious to > formulate the problem in PDE terms. If you choose to, an anisotropic diffusion equation is no big deal (we've got an example in the manual). N=1e6 is also no challenge, although because FiPy is something of a memory hog, that would be pushing the capacity of a 2 gig machine. We don't have any facilities for multiple RHS vectors, though. From tpk at kraussfamily.org Fri Oct 30 22:43:09 2009 From: tpk at kraussfamily.org (Tom K.) Date: Fri, 30 Oct 2009 19:43:09 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] least-square filter design In-Reply-To: References: Message-ID: <26139404.post@talk.nabble.com> Neal Becker wrote: > > Anyone have code for least square (minimum mean square error) FIR filter > design? > Could you be a little more specific? scipy.signal.firwin almost designs a least square low pass FIR filter if you use a rectangular window (I say almost because like other packages the filter's response is normalized to unity at DC so technically it is not least squares although the difference is slight and decreases with increasing filter order). Do you need a transition band? What type of FIR filter: lowpass, highpass, bandpass, bandstop, or multiband? Are discrete samples OK, or do you need a continuous band (or set of bands)? Which type of filter - is symmetric OK, or do you need antisymmetric? Or, are you talking about an adaptive filter? -- View this message in context: http://old.nabble.com/least-square-filter-design-tp26083443p26139404.html Sent from the Scipy-User mailing list archive at Nabble.com. 
From robfalck at gmail.com Sat Oct 31 09:02:25 2009 From: robfalck at gmail.com (Rob Falck) Date: Sat, 31 Oct 2009 09:02:25 -0400 Subject: [SciPy-User] using scipy.optimize.fmin_slsqp and setting bounds=(None, None) In-Reply-To: <1cd32cbb0910200713m60c8761cn3063555ea7710374@mail.gmail.com> References: <961987.41200.qm@web45908.mail.sp1.yahoo.com> <1cd32cbb0910200713m60c8761cn3063555ea7710374@mail.gmail.com> Message-ID: I have a fix for this which changes the default bounds from +/- 1E12 to numpy.finfo(float).max and numpy.finfo(float).min. It also now detects None and inf and replaces them with max or min, depending on whether they are used in the upper or lower bound. I'll try to get it into the system this weekend. On Tue, Oct 20, 2009 at 10:13 AM, wrote: > On Mon, Oct 19, 2009 at 5:46 PM, Peter Halverson > wrote: > > I'm not sure if this is user error or an actual bug. When I attempt to > set > > my bounds in fmin_slsqp the option bounds =[(-10,10),(0,None)] is not > > recognized Scipy crashes with > > > > IndexError Traceback (most recent call > last) > > > > C:\Documents and Settings\All Users\Documents\Python\ in > > () > > > > C:\Python25\lib\site-packages\scipy\optimize\slsqp.pyc in > fmin_slsqp(func, > > x0, eqcons, f_eqcons, ieqcons, f_ieqcons, bounds, fprime, fprime_eqcons, > > fprime_ieqcons, args, iter, acc, iprint, full_output, epsilon) > > 244 if bounds[i][0] > bounds[i][1]: > > 245 raise ValueError, \ > > --> 246 'SLSQP Error: lb > ub in bounds[' + str(i) +'] ' > + > > str(bounds[4]) > > 247 > > 248 xl = array( [ b[0] for b in bounds ] ) > > > > IndexError: list index out of range > > > > My code: > > > > import numpy as np > > import scipy as sp > > from scipy.optimize import fmin_slsqp as fmincon > > > > #The purpose of this script is > > # minimize x0+x1^2 > > # where a and b are constants > > # and 0 > # and x1-x2>0 > > > > def fitfun(x): > > t=x[0]+x[1]**2 > > return t > > > > def confun(x): > > return (x[0]-x[1]) > > > > bnds =[(-10,10),(0,None)] > > guess=[0.5,0.5] > > > > fmincon(fitfun,guess,ieqcons=[confun],bounds=bnds) > > > > If this is a bug I will report it to the appropriate places. If its not a > > bug please help me sort it out. > > > > I think bounds are just assumed to be finite (``inf`` doesn't work either) > > If you set the bounds to a number that is unlikely to be binding, > then it works. > > For example, with > > bnds =[(-10,10),(0,1000)] > > your example and several variations of it with different constraints > binding, > that I tried, work without problems. > > Josef > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: