From ndbecker2 at gmail.com Tue Jul 3 07:32:52 2012
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 03 Jul 2012 07:32:52 -0400
Subject: [SciPy-User] power series curve fit
Message-ID: 

I'm trying to find a power series fit. My objective function is to minimize ||y - \sum_i k_i x^i||. Here the coefficients k_i are complex. I am trying slsqp, but I'm just wondering if there is a more direct or efficient approach.

From alejandro.weinstein at gmail.com Tue Jul 3 08:13:40 2012
From: alejandro.weinstein at gmail.com (Alejandro Weinstein)
Date: Tue, 3 Jul 2012 06:13:40 -0600
Subject: [SciPy-User] power series curve fit
In-Reply-To: References: Message-ID: 

On Tue, Jul 3, 2012 at 5:32 AM, Neal Becker wrote:
> I am trying slsqp, but I'm just wondering if there is a more direct or efficient
> approach.

This page contains some code to fit power laws to data: http://tuvalu.santafe.edu/~aaronc/powerlaws/

The paper related to this code is a good read (there is a link to it on the same page). It argues that modeling your data with a power law is often not a good idea.

Alejandro.

From m.boumans at gmx.net Mon Jul 2 15:41:32 2012
From: m.boumans at gmx.net (Marcus Boumans)
Date: Mon, 02 Jul 2012 21:41:32 +0200
Subject: [SciPy-User] Question regarding UnivariateSpline and splrep
Message-ID: <4FF1F96C.3090407@gmx.net>

Hello,

I want to calculate the derivatives of a spline using the features of scipy.interpolate. I have found two possible ways.

One uses the OO approach:

s_long = UnivariateSpline(distance, gtaTrack_Container.reftrack.longitude, w=None, bbox=[None, None], k=2, s=0)
long_der = s_long.derivatives(dist_spline)

The other uses the procedural approach:

tck_long = splrep(distance, gtaTrack_Container.reftrack.longitude)
long_der = spalde(dist_spline, tck_long)

My questions: When using the OO approach with dist_spline being an array, I get back an array with only 3 elements. I assume those values are the first to nth derivatives of the element dist_spline[0]!? Is this correct?
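As far as I can tell, UnivariateSpline.derivatives() evaluates at a single scalar point (returning the 0th through kth derivatives there), while spalde handles an array of points. A minimal self-contained sketch of the difference, with toy sine data standing in for the track arrays:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline, splrep, spalde

# Toy data in place of the distance/longitude track arrays.
x = np.linspace(0.0, 10.0, 50)
y = np.sin(x)

# OO approach: derivatives() takes one scalar point and returns the
# 0th..kth derivatives there (k+1 values), so loop over the points.
s = UnivariateSpline(x, y, k=2, s=0)
pts = np.array([1.0, 2.5, 4.0])
ders_oo = np.array([s.derivatives(p) for p in pts])   # shape (3, 3)

# Procedural approach: spalde accepts the whole array of points and
# returns all derivatives for each point.
tck = splrep(x, y, k=2, s=0)
ders_proc = np.array(spalde(pts, tck))                # shape (3, 3)
```

With s=0 both calls build the same interpolating spline, so the two results should agree.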
Using the procedural approach with dist_spline being an array, I get back an array with 3 columns and len(dist_spline) rows. I assume those values are the first to nth derivatives of each element of dist_spline!? That is what I would need. Is this correct?

Can anybody comment on this difference in behavior? Do I have to use a for loop to get the derivatives of all elements of dist_spline with the OO approach? I think it would be worth giving a little more detail in the help for the derivatives method of UnivariateSpline.

Regs Marcus

From mdekauwe at gmail.com Tue Jul 3 19:30:55 2012
From: mdekauwe at gmail.com (mdekauwe)
Date: Tue, 3 Jul 2012 16:30:55 -0700 (PDT)
Subject: [SciPy-User] Recarrays loops and headers??
Message-ID: <34110626.post@talk.nabble.com>

Hi,

I have three problems which I can't quite seem to solve when using recarrays.

1). If I read some data from a csv file like so...

import numpy as np

# data look like...
# Col1 Col2 Col3 Col4 Fruit Month Day
# 59.9541 2.1631 37.8446 3.5895 Plum Aug 1
# 46.7951 1.8935 24.3321 2.0352 Plum Aug 5
# 35.3719 1.8274 14.5228 1.3906 Pear Aug 8
# 74.5296 1.9938 60.7940 4.2176 Pear Aug 9
data_all = np.recfromcsv("results.txt", delimiter=" ", names=True, case_sensitive=True)

# data look like....
# Col1 Col2 Col3 Col4 Fruit Month Day
# 59.9541 2.1631 37.8446 3.5895 Plum Aug 1
# 74.5296 1.9938 60.7940 4.2176 Pear Aug 9
data_25 = np.recfromcsv("data_at_25.txt", delimiter=" ", names=True, case_sensitive=True)

And then query data_25.shape:

print data_25.shape
(2,)

it returns only the number of rows, rather than the number of rows and columns. Is there a way around this? This means I can't, for example, reshape the array, can I?

2). If I want to find the rows of the larger array that match the smaller array and use the matching column values to perform a calculation, is there a better way than two loops? e.g.

vnorm = np.ones(len(data_all)) * -9999.
jnorm = np.ones(len(data_all)) * -9999.
for i in xrange(len(data_all)):
    for j in xrange(len(data_25)):
        if ((data_25["Fruit"][j] == data_all["Fruit"][i]) and
                (data_25["Month"][j] == data_all["Month"][i]) and
                (data_25["Day"][j] == data_all["Day"][i])):
            v25 = data_25["Col1"][j]
            j25 = data_25["Col2"][j]
            vnorm[i] = data_all["Col1"][i] / v25
            jnorm[i] = data_all["Col2"][i] / j25

3). Is there a way to join vnorm and jnorm to the data_all array so it can easily be written as a CSV file, e.g. using

import matplotlib.mlab as mlab
data_all = np.hstack((data_all, vnorm))
f = open("crap.txt", "w")
mlab.rec2csv(data_all, f, delimiter=" ")
f.close()

I think this doesn't work because vnorm doesn't have a header?

return _nx.concatenate(map(atleast_1d,tup),1)
TypeError: invalid type promotion

The only way I seem to be able to get what I want is to explicitly loop over the rows and columns again and print the outputs.

thanks,
Martin

--
View this message in context: http://old.nabble.com/Recarrays-loops-and-headers---tp34110626p34110626.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From silva at lma.cnrs-mrs.fr Wed Jul 4 07:07:15 2012
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Wed, 04 Jul 2012 13:07:15 +0200
Subject: [SciPy-User] ODE integration : pre allocation
Message-ID: <1341400035.4164.11.camel@amilo.coursju>

Hi folks,

I was wondering if the ode integrators in scipy.integrate.ode handle pre-allocation of the result of the rhs (i.e. y' = f(y, t)). Creating/allocating the result array on each call may lead to a significant performance penalty, am I wrong? Is there a way to provide a reference to a pre-allocated array that would only have to be filled? For example, the callable signature would be: f(t, y, y', *f_args), where t, y and f_args are inputs and y' is the output. When calling the callable, y' would be a reference to the result array to be filled.

Any thoughts?
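As far as I know, the scipy.integrate.ode interface has no such output-array argument, but the Python-side allocation can at least be avoided by closing over a buffer that the rhs refills on every call (the wrapper may still copy the returned values internally). A minimal sketch with a made-up harmonic-oscillator rhs:

```python
import numpy as np
from scipy.integrate import ode

ydot = np.empty(2)  # allocated once, reused on every rhs call

def rhs(t, y):
    # Harmonic oscillator y'' = -y, written as a first-order system.
    ydot[0] = y[1]
    ydot[1] = -y[0]
    return ydot  # same buffer every time, no per-call allocation

r = ode(rhs).set_integrator('dopri5')
r.set_initial_value([1.0, 0.0], 0.0)
r.integrate(1.0)  # r.y is now approximately [cos(1), -sin(1)]
```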
--
Fabrice Silva

From alec.kalinin at gmail.com Wed Jul 4 09:37:10 2012
From: alec.kalinin at gmail.com (Alexander Kalinin)
Date: Wed, 4 Jul 2012 17:37:10 +0400
Subject: [SciPy-User] NumPy array slicing: take all rows but several columns from the array
Message-ID: 

Hello,

I am trying to find the best way of slicing a NumPy array. I need to extract all rows but only several columns from the array. For example, I want to fill only columns 0, 2 and 5 of the array with 1.0. But the following code does not work:

import numpy as np
A = np.zeros((10, 7))
A[:, 0, 2, 5] = 1.0  # does not work

What is the best way to do it?

Sincerely,
Alexander
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martin at mhelm.de Wed Jul 4 09:44:41 2012
From: martin at mhelm.de (Martin Helm)
Date: Wed, 04 Jul 2012 15:44:41 +0200
Subject: [SciPy-User] NumPy array slicing: take all rows but several columns from the array
In-Reply-To: References: Message-ID: <4FF448C9.3020608@mhelm.de>

Am 04.07.2012 15:37, schrieb Alexander Kalinin:
> import numpy as np
> A = np.zeros((10, 7))
> A[:, 0, 2, 5] = 1.0 # does not work

Use A[:, [0, 2, 5]] = 1.0. As you wrote it, you are indexing a 4-dimensional array.

From alec.kalinin at gmail.com Wed Jul 4 09:46:55 2012
From: alec.kalinin at gmail.com (Alexander Kalinin)
Date: Wed, 4 Jul 2012 17:46:55 +0400
Subject: [SciPy-User] NumPy array slicing: take all rows but several columns from the array
In-Reply-To: References: Message-ID: 

I found my error. I need to use a tuple of indices. The following code does what I want:

import numpy as np
A = np.zeros((10, 7))
A[:, (0, 2, 5)] = 1.0

Alexander.

On Wed, Jul 4, 2012 at 5:37 PM, Alexander Kalinin wrote:
> Hello,
>
> I am trying to find the best way of slicing a NumPy array. I need to extract
> all rows but only several columns from the array. For example, I want to fill
> only columns 0, 2 and 5 of the array with 1.0.
> But the following code does not work:
>
> import numpy as np
> A = np.zeros((10, 7))
> A[:, 0, 2, 5] = 1.0 # does not work
>
> What is the best way to do it?
>
> Sincerely,
> Alexander
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mogliii at gmx.net Wed Jul 4 12:32:50 2012
From: mogliii at gmx.net (mogliii)
Date: Wed, 04 Jul 2012 17:32:50 +0100
Subject: [SciPy-User] scipy.odr question
Message-ID: <4FF47032.6090305@gmx.net>

I tried to fit a Lorentzian to some data. Below is my working example. Strangely, the results of the odr fit are far off, even though the starting values are very reasonable. I also attached a plot of it.

What is going wrong with the odr fit?

I'm looking into odr because I get better fit results with peak-o-mat (which apparently uses odr) than with leastsq or mpfit. See also my question on stackoverflow: http://stackoverflow.com/questions/11330131/difference-between-levenberg-marquardt-algorithm-and-odr

Code:
###################
import scipy.odr as odr
import numpy as np
import matplotlib.pyplot as plt

def f(B, x):
    '''Lorentzian with offset'''
    return B[0] / (1 + np.power((x - B[1]) / (B[2] / 2), 2)) + B[3]

linear = odr.Model(f)

data = np.loadtxt('constraint.dat')  # read data
x = data[:, 0]
y = data[:, 1]

mydata = odr.RealData(x, y, sx=x, sy=y)
beta0 = [2500., 2440., 18., 4500.]
myodr = odr.ODR(mydata, linear, beta0=beta0, maxit=2000)
myoutput1 = myodr.run()  # odr fit
myodr.set_job(fit_type=2)  # leastsq fit (?)
myoutput2 = myodr.run()

plt.plot(x, y)  # raw data
plt.plot(x, f(beta0, x))  # start values
plt.plot(x, f(myoutput1.beta, x))  # fit 1
plt.plot(x, f(myoutput2.beta, x))  # fit 2
plt.legend(('raw data', 'start values', 'odr fit', 'leastsq fit'))
#plt.savefig('odrfit.png', dpi=90)
plt.show()
-------------- next part --------------
[attachment: constraint.dat, two columns (x, y) of spectral data]
2.325380000000000109e+03 4.853439999999999600e+03 2.324400000000000091e+03 4.839819999999999709e+03 2.323429999999999836e+03 4.841600000000000364e+03 2.322449999999999818e+03 4.848489999999999782e+03 2.321480000000000018e+03 4.757970000000000255e+03 2.320500000000000000e+03 4.680300000000000182e+03 2.319530000000000200e+03 4.712850000000000364e+03 2.318550000000000182e+03 4.694140000000000327e+03 2.317579999999999927e+03 4.759979999999999563e+03 2.316599999999999909e+03 4.682350000000000364e+03 2.315619999999999891e+03 4.822439999999999600e+03 2.314650000000000091e+03 4.714100000000000364e+03 2.313670000000000073e+03 4.667250000000000000e+03 2.312699999999999818e+03 4.784229999999999563e+03 2.311719999999999800e+03 4.691310000000000400e+03 2.310739999999999782e+03 4.703329999999999927e+03 2.309760000000000218e+03 4.592539999999999964e+03 2.308789999999999964e+03 4.701789999999999964e+03 2.307809999999999945e+03 4.703579999999999927e+03 2.306829999999999927e+03 4.792310000000000400e+03 2.305849999999999909e+03 4.765949999999999818e+03 2.304880000000000109e+03 4.660359999999999673e+03 2.303900000000000091e+03 4.692819999999999709e+03 2.302920000000000073e+03 4.784050000000000182e+03 2.301940000000000055e+03 4.624850000000000364e+03 2.300960000000000036e+03 4.695619999999999891e+03 2.299980000000000018e+03 4.526260000000000218e+03 -------------- next part -------------- A non-text attachment was scrubbed... Name: odrfit.png Type: image/png Size: 55094 bytes Desc: not available URL: From pav at iki.fi Wed Jul 4 15:38:39 2012 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 04 Jul 2012 21:38:39 +0200 Subject: [SciPy-User] scipy.odr question In-Reply-To: <4FF47032.6090305@gmx.net> References: <4FF47032.6090305@gmx.net> Message-ID: 04.07.2012 18:32, mogliii kirjoitti: > I tried to fit a lorentzian to some code. Below is my working example. > Strangely for the odr-fitting the result are far off, even though the > starting values are very reasonable. 
> I also attached a plot of it.
>
> What is going wrong with odr fit?
[clip]
> mydata = odr.RealData(x, y, sx = x, sy = y)

What you are effectively saying here is that your data has 100%
uncertainty both in x and y coordinates. This is a very different
assumption than what goes into a least-squares fit (no error in the
x-coordinate), so it's no wonder the results differ.

The ODR fit is most likely the correct solution to the problem you are
posing, but the problem you tell it to solve is probably not what you
have in mind.

--
Pauli Virtanen

From mogliii at gmx.net  Wed Jul  4 16:06:10 2012
From: mogliii at gmx.net (Mogliii)
Date: Wed, 04 Jul 2012 21:06:10 +0100
Subject: [SciPy-User] scipy.odr question
In-Reply-To:
References: <4FF47032.6090305@gmx.net>
Message-ID: <4FF4A232.3080401@gmx.net>

> [clip]
>> mydata = odr.RealData(x, y, sx = x, sy = y)
> What you are effectively saying here is that your data has 100%
> uncertainty both in x and y coordinates. This is a very different
> assumption than what goes into a least-squares fit (no error in the
> x-coordinate), so it's no wonder the results differ.
>
> The ODR fit is most likely the correct solution to the problem you are
> posing, but the problem you tell it to solve is probably not what you
> have in mind.

I have no error data, so assuming there is no uncertainty:

1) If I omit sx and sy I get the same result.
2) If I use sx = np.zeros_like(x) I get "RuntimeWarning: divide by zero
   encountered in divide"
3) If I use sx = np.ones_like(x) I get the same result

Could you please give me a hint on how to modify the code so that it
honors the fact that there is no error data?

Very much appreciated

Mogliii

PS: I forgot to introduce myself to the mailing list before. Hi everyone!
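[Editorial sketch, not part of the thread: the workaround suggested in the follow-up below — a vanishingly small `sx` so that x is treated as essentially exact — can be illustrated with a self-contained example. The Lorentzian model, data and starting values here are invented for illustration only.]

```python
import numpy as np
from scipy import odr

def lorentz(beta, x):
    # beta = [height, center, half-width]; a hypothetical peak model
    A, x0, w = beta
    return A * w**2 / ((x - x0)**2 + w**2)

rng = np.random.RandomState(42)
x = np.linspace(-5.0, 5.0, 201)
y = lorentz([2.0, 0.5, 1.0], x) + 0.02 * rng.standard_normal(x.size)

model = odr.Model(lorentz)
# sx << sy: x is treated as (almost) error-free, so the orthogonal
# distance regression behaves like an ordinary least-squares fit in y.
data = odr.RealData(x, y, sx=1e-99, sy=1.0)
out = odr.ODR(data, model, beta0=[1.0, 0.0, 2.0]).run()
print(out.beta)  # should land near the true [2.0, 0.5, 1.0]
```

Passing `sx=x, sy=y` instead (as in the quoted code) would weight each point by its own magnitude, which is usually not what is intended.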
From pav at iki.fi  Wed Jul  4 16:36:38 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 04 Jul 2012 22:36:38 +0200
Subject: [SciPy-User] scipy.odr question
In-Reply-To: <4FF4A232.3080401@gmx.net>
References: <4FF47032.6090305@gmx.net> <4FF4A232.3080401@gmx.net>
Message-ID:

04.07.2012 22:06, Mogliii kirjoitti:
[clip]
> Could you please give me a hint on how to modify the code so that it
> honors the fact that there is no error data?

I think you cannot tell it that the errors are exactly zero. However, to
get a least-squares result, you need to say that the y-errors are way
bigger than the x-errors, for example

    sx=1e-99, sy=1.0

--
Pauli Virtanen

From mogliii at gmx.net  Wed Jul  4 16:51:52 2012
From: mogliii at gmx.net (Mogliii)
Date: Wed, 04 Jul 2012 21:51:52 +0100
Subject: [SciPy-User] scipy.odr question
In-Reply-To:
References: <4FF47032.6090305@gmx.net> <4FF4A232.3080401@gmx.net>
Message-ID: <4FF4ACE8.7030701@gmx.net>

> I think you cannot tell it that the errors are exactly zero. However, to
> get a least-squares result, you need to say that the y-errors are way
> bigger than the x-errors, for example
>
> sx=1e-99, sy=1.0
>

You are right, see attached image. Now I will need to find out why
peak-o-mat finds a better fit than mpfit and leastsq...

Also good to know that the uncertainty can be given as a scalar and
doesn't have to be a list.

Mogliii

-------------- next part --------------
A non-text attachment was scrubbed...
Name: odrfit.png
Type: image/png
Size: 52744 bytes
Desc: not available
URL:

From mogliii at gmx.net  Wed Jul  4 18:37:11 2012
From: mogliii at gmx.net (Mogliii)
Date: Wed, 04 Jul 2012 23:37:11 +0100
Subject: [SciPy-User] scipy.odr question
In-Reply-To:
References: <4FF47032.6090305@gmx.net> <4FF4A232.3080401@gmx.net>
Message-ID: <4FF4C597.8040301@gmx.net>

ODR Rocks! At least for my data. It fits much better than
scipy.optimize.leastsq and mpfit.py. See for yourself. The component
curves belong to the odr-fit.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: odrfit.pdf Type: application/pdf Size: 38353 bytes Desc: not available URL: From elcortogm at googlemail.com Thu Jul 5 07:20:39 2012 From: elcortogm at googlemail.com (Steve Schmerler) Date: Thu, 5 Jul 2012 13:20:39 +0200 Subject: [SciPy-User] building numpy + scipy with MKL 10.3 and icc + ifort 12.1 Message-ID: <20120705112039.GA11171@cartman.physik.tu-freiberg.de> Hello I built numpy 1.6.2 and scipy 0.10.1 with MKL 10.3, using ifort+icc or ifort+gcc, following [1,2] on a Red Hat based box. Build details are at the end of this mail. numpy ----- Using gcc, all tests pass. Using icc, I get some messages like:: .../numpy/core/tests/test_umath.py:570: RuntimeWarning: invalid value encountered in fmax assert_equal(np.fmax(arg1, arg2), out) all in test_umath.py. No test fails however. Is this a problem? scipy ----- With both gcc and icc, I get 3 errors. ====================================================================== ERROR: test_simple_fat (test_decomp.TestRQ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/schmerler/soft/lib/python2.7/site-packages/scipy/linalg/tests/test_decomp.py", line 1155, in test_simple_fat r,q = rq(a) File "/home/schmerler/soft/lib/python2.7/site-packages/scipy/linalg/decomp_qr.py", line 293, in rq rq, tau, work, info = gerqf(a1, lwork=lwork, overwrite_a=overwrite_a) error: (lwork>=n||lwork==-1) failed for 1st keyword lwork: dgerqf:lwork=2 ====================================================================== ERROR: test_simple_trap (test_decomp.TestRQ) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/schmerler/soft/lib/python2.7/site-packages/scipy/linalg/tests/test_decomp.py", line 1143, in test_simple_trap r,q = rq(a) File "/home/schmerler/soft/lib/python2.7/site-packages/scipy/linalg/decomp_qr.py", line 293, in rq rq, tau, work, info = gerqf(a1, lwork=lwork, overwrite_a=overwrite_a) error: 
(lwork>=n||lwork==-1) failed for 1st keyword lwork: dgerqf:lwork=2 and one which says ".../atlas_version.so: undefined symbol: ATL_buildinfo" but that's probably because I don't have ATLAS. Have others seen the TestRQ error with MKL? I haven't built against normal blas/lapack yet to check if this may be a scipy issue. Thanks. ---------------------------------------------------------------------- Build details ---------------------------------------------------------------------- python 2.7.1 numpy 1.6.2 scipy 0.10.1 MKL 10.3, ifort/icc 12.1 gcc 4.1.2 (same was used to build python) numpy site.cfg: [DEFAULT] library_dirs = /usr/lib64 include_dirs = /usr/include [mkl] library_dirs = /cm/shared/apps/intel-cs/composer_xe_2011_sp1.9.293/mkl/lib/intel64 include_dirs = /cm/shared/apps/intel-cs/composer_xe_2011_sp1.9.293/mkl/include lapack_libs = mkl_intel_lp64, mkl_sequential, mkl_core mkl_libs = mkl_intel_lp64, mkl_sequential, mkl_core, mkl_mc3, mkl_def modified: numpy/distutils/intelccompiler.py (IntelEM64TCCompiler) icc -O2 -xhost -fPIC -fp-model strict -fomit-frame-pointer numpy/distutils/fcompiler/intel.py ifort -O2 -xhost -fp-model strict -fPIC install (numpy,scipy): gcc + ifort python setup.py build --fcompiler=intelem icc + ifort python setup.py build --compiler=intelem --fcompiler=intelem [1] http://software.intel.com/en-us/articles/numpy-scipy-with-mkl/ [2] http://www.scipy.org/Installing_SciPy/Linux#head-0b5ce001569a20ddbbdb2187578000372a09acb1 best, Steve From pav at iki.fi Thu Jul 5 13:54:08 2012 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 05 Jul 2012 19:54:08 +0200 Subject: [SciPy-User] building numpy + scipy with MKL 10.3 and icc + ifort 12.1 In-Reply-To: <20120705112039.GA11171@cartman.physik.tu-freiberg.de> References: <20120705112039.GA11171@cartman.physik.tu-freiberg.de> Message-ID: 05.07.2012 13:20, Steve Schmerler kirjoitti: [clip] > numpy > ----- > Using gcc, all tests pass. 
> Using icc, I get some messages like::
>
>     .../numpy/core/tests/test_umath.py:570: RuntimeWarning: invalid
>     value encountered in fmax
>     assert_equal(np.fmax(arg1, arg2), out)
>
> all in test_umath.py. No test fails however. Is this a problem?
[clip]

Not a problem --- it's just an indication that some tests encounter NaNs.

[clip]
> With both gcc and icc, I get 3 errors.
>
> ======================================================================
> ERROR: test_simple_fat (test_decomp.TestRQ)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/schmerler/soft/lib/python2.7/site-packages/scipy/linalg/tests/test_decomp.py", line 1155, in test_simple_fat
>     r,q = rq(a)
>   File "/home/schmerler/soft/lib/python2.7/site-packages/scipy/linalg/decomp_qr.py", line 293, in rq
>     rq, tau, work, info = gerqf(a1, lwork=lwork, overwrite_a=overwrite_a)
> error: (lwork>=n||lwork==-1) failed for 1st keyword lwork: dgerqf:lwork=2

There was an actual bug in the RQ codes on the Scipy side in 0.10.1,

https://github.com/scipy/scipy/commit/ed29437a98d1

It doesn't always appear, though, but MKL may make the difference. Can
you try whether it is fixed (it should be) in the 0.11.0 beta?

--
Pauli Virtanen

From josef.pktd at gmail.com  Fri Jul  6 12:29:11 2012
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 6 Jul 2012 12:29:11 -0400
Subject: [SciPy-User] ANN: statsmodels 0.4.3
Message-ID:

We are pleased to announce the release of statsmodels 0.4.3.

statsmodels is a general purpose statistics and econometrics package
written in Python with some optional Cython extensions.

Compared to release 0.4.0, this release contains bug-fixes, code and
documentation cleanup, some enhancements, and improved compatibility
across platforms, including compatibility with python 3.2.3, pandas 0.8
and numpy 1.6.2. More details are below.
We recommend upgrading to 0.4.3 Josef and Skipper 0.4.3 ----- The only change compared to 0.4.2 is for compatibility with python 3.2.3 (changed behavior of 2to3). 0.4.2 ----- This is a bug-fix release that affects mainly Big-Endian machines. *Bug Fixes* * discrete_model.MNLogit: fix summary method * examples in documentation: correct file path * tsa.filters.hp_filter: don't use umfpack on Big-Endian machine (scipy bug) * the remaining fixes are in the test suite, either precision problems on some machines or incorrect testing on Big-Endian machines. 0.4.1 ----- This is a backwards compatible (according to our test suite) release with bug fixes and code cleanup. *Bug Fixes* * build and distribution fixes * lowess correct distance calculation * genmod correction CDFlink derivative * adfuller _autolag correct calculation of optimal lag * het_arch, het_lm : fix autolag and store options * GLSAR: incorrect whitening for lag>1 *Other Changes* * add lowess and other functions to api and documentation * rename lowess module (old import path will be removed at next release) * new robust sandwich covariance estimators, moved out of sandbox * compatibility with pandas 0.8 * new plots in statsmodels.graphics - ABLine plot - interaction plot What it is ========== Statsmodels is a Python package that provides a complement to scipy for statistical computations including descriptive statistics and estimation and inference for statistical models. Main Features ============= * linear regression models: Generalized least squares (including weighted least squares and least squares with autoregressive errors), ordinary least squares. * glm: Generalized linear models with support for all of the one-parameter exponential family distributions. * discrete: regression with discrete dependent variables, including Logit, Probit, MNLogit, Poisson, based on maximum likelihood estimators * rlm: Robust linear models with support for several M-estimators. 
* tsa: models for time series analysis
  - univariate time series analysis: AR, ARIMA
  - vector autoregressive models, VAR and structural VAR
  - descriptive statistics and process models for time series analysis
* nonparametric: (univariate) kernel density estimators
* datasets: Datasets to be distributed and used for examples and in
  testing.
* stats: a wide range of statistical tests
  - diagnostics and specification tests
  - goodness-of-fit and normality tests
  - functions for multiple testing
  - various additional statistical tests
* iolib
  - Tools for reading Stata .dta files into numpy arrays.
  - printing table output to ascii, latex, and html
* miscellaneous models
* sandbox: statsmodels contains a sandbox folder with code in various
  stages of development and testing which is not considered "production
  ready". This covers, among others, Mixed (repeated measures) Models,
  GARCH models, general method of moments (GMM) estimators, kernel
  regression, various extensions to scipy.stats.distributions, panel
  data models, generalized additive models and information theoretic
  measures.
Where to get it =============== The master branch on GitHub is the most up to date code https://www.github.com/statsmodels/statsmodels Source download of release tags are available on GitHub https://github.com/statsmodels/statsmodels/tags Binaries and source distributions are available from PyPi http://pypi.python.org/pypi/statsmodels/ Installation from sources ========================= See INSTALL.txt for requirements or see the documentation http://statsmodels.sf.net/devel/install.html License ======= Modified BSD (3-clause) Documentation ============= The official documentation is hosted on SourceForge http://statsmodels.sf.net/ Windows Help ============ We are providing a Windows htmlhelp file (statsmodels.chm) that is now separately distributed, available at http://sourceforge.net/projects/statsmodels/files/statsmodels-0.4.3/statsmodelsdoc.zip/download It can be copied or moved to the installation directory of statsmodels (site-packages\statsmodels in a typical installation), and can then be opened from the python interpreter :: >>> import statsmodels.api as sm >>> sm.open_help() Discussion and Development ========================== Discussions take place on our mailing list. http://groups.google.com/group/pystatsmodels We are very interested in feedback about usability and suggestions for improvements. Bug Reports =========== Bug reports can be submitted to the issue tracker at https://github.com/statsmodels/statsmodels/issues From tmp50 at ukr.net Sat Jul 7 05:33:03 2012 From: tmp50 at ukr.net (Dmitrey) Date: Sat, 07 Jul 2012 12:33:03 +0300 Subject: [SciPy-User] [ANN] Stochastic programming and optimization addon for FuncDesigner Message-ID: <38680.1341653583.12878990481238851584@ffe15.ukr.net> hi all, you may be interested in stochastic programming and optimization with free Python module FuncDesigner. 
We have written a Stochastic addon for FuncDesigner, but (at least for
several years) it will be commercial (currently it is free only for some
small-scale problems and for noncommercial research / educational
purposes). However, we will try to keep our prices several times lower
than our competitors'. Also, we will provide some discounts, including
region-based ones, and the first 15 customers will also get a discount.

For further information, documentation, examples etc. read more at
http://openopt.org/StochasticProgramming

Regards, D.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tenghj at mail.biols.ac.cn  Fri Jul  6 12:07:03 2012
From: tenghj at mail.biols.ac.cn (Huajing Teng)
Date: Sat, 7 Jul 2012 00:07:03 +0800
Subject: [SciPy-User] questions about lapack_lite.so: undefined symbol
Message-ID: <201207070007032349449@mail.biols.ac.cn>

Dear Drs,

Only a C compiler, not a Fortran compiler, was used on our Linux server.
When I installed scipy-0.10.0, I got the following error:

ImportError: /panfs/home/sun/tenghj/public/software/numpy/lib/python2.7/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: _gfortran_concat_string

I wonder whether you can help me. Thanks for your kind assistance.
-------------- Huajing Teng Beijing Institutes of Life Science, Chinese Academy of Sciences Beichen West Road,Chao Yang District,Beijing 100101, China E-mail:tenghj at mail.biols.ac.cn 2012-07-07 From elcortogm at googlemail.com Sat Jul 7 12:10:16 2012 From: elcortogm at googlemail.com (Steve Schmerler) Date: Sat, 7 Jul 2012 18:10:16 +0200 Subject: [SciPy-User] building numpy + scipy with MKL 10.3 and icc + ifort 12.1 In-Reply-To: References: <20120705112039.GA11171@cartman.physik.tu-freiberg.de> Message-ID: <20120707161016.GA7111@kenny.southpark.de> On Jul 05 19:54 +0200, Pauli Virtanen wrote: > There was an actual bug in the RQ codes on the Scipy side in 0.10.1, > https://github.com/scipy/scipy/commit/ed29437a98d1 > > Doesn't appear always, though, but MKL may make the difference. Can you > try if it is fixed (should be) in the the 0.11.0 beta? Yes, 0.11.0b1 works. Thanks. best, Steve From newsboost at gmail.com Sun Jul 8 00:24:32 2012 From: newsboost at gmail.com (newsboost) Date: Sun, 08 Jul 2012 06:24:32 +0200 Subject: [SciPy-User] noob question about slicing / extracting numbers from array... 
Message-ID: <4FF90B80.8080105@gmail.com>

Test program:
------------------------------------
#!/usr/bin/python

#import numpy as np
import scipy
import scipy.io
import sys
import numpy

# load data
timesteps = 2
bodies = 3
q = scipy.arange(0, bodies*7*timesteps).reshape(timesteps,bodies*7).T
print "type(q) = ", type(q)
print "q.shape =", q.shape
print "q = "
print q
print

LCS=scipy.zeros((bodies,3))
xyz=scipy.array([0,1,2])

for x in range(0,timesteps):
    print
    print "Step: ", x
    print "------------------"

    for b in range(0,bodies):
        # LCS[b] = q[xyz][x]
        print "Body: ", b, " --- q",xyz,"[",x,"] ="
        test = q[xyz][x]
        print "     q[xyz][x] = ", test
        xyz = xyz + 7 # go to next body
------------------------------------

Output:
===============
$ ./test.py
type(q) =
q.shape = (21, 2)
q =
[[ 0 21]
 [ 1 22]
 [ 2 23]
 [ 3 24]
 [ 4 25]
 [ 5 26]
 [ 6 27]
 [ 7 28]
 [ 8 29]
 [ 9 30]
 [10 31]
 [11 32]
 [12 33]
 [13 34]
 [14 35]
 [15 36]
 [16 37]
 [17 38]
 [18 39]
 [19 40]
 [20 41]]


Step: 0
------------------
Body: 0 --- q [0 1 2] [ 0 ] =
     q[xyz][x] = [ 0 21]
Body: 1 --- q [7 8 9] [ 0 ] =
     q[xyz][x] = [ 7 28]
Body: 2 --- q [14 15 16] [ 0 ] =
     q[xyz][x] = [14 35]

Step: 1
------------------
Body: 0 --- q [21 22 23] [ 1 ] =
Traceback (most recent call last):
  File "./test.py", line 30, in
    test = q[xyz][x]
IndexError: index (21) out of range (0<=index<20) in dimension 0
===============

What I want:
************
Step: 0 (using data from the 1.column)
------------------
Body: 0 --- q [0 1 2] [ 0 ] =
     q[xyz][x] = [ 0 1 2]
Body: 1 --- q [7 8 9] [ 0 ] =
     q[xyz][x] = [ 7 8 9]
Body: 2 --- q [14 15 16] [ 0 ] =
     q[xyz][x] = [14 15 16]

Step: 1 (etc. as above, but now using the 2. column)
------------------
Body: 0 --- q [0 1 2] [ 1 ] =
     q[xyz][x] = [ 21 22 23]
Body: 1 --- q [7 8 9] [ 1 ] =
     q[xyz][x] = [ 28 29 30]
etc.
************

Hope you understand... It's a noob question, I know... I just cannot
make it right...

Thanks...
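[Editorial note on the numpy indexing semantics at play in this thread, using the same toy array as the test program above:]

```python
import numpy as np

timesteps, bodies = 2, 3
q = np.arange(bodies * 7 * timesteps).reshape(timesteps, bodies * 7).T
xyz = np.array([0, 1, 2])

# q[xyz] selects three whole ROWS -> shape (3, 2); a second [x] then picks
# one of those rows, which is why the loop above prints pairs like [0 21].
row_pair = q[xyz][0]          # -> array([ 0, 21])

# q[xyz, x] pairs the row indices with column x, like MATLAB's q(xyz, x):
step0_body0 = q[xyz, 0]       # -> array([0, 1, 2])
step1_body1 = q[xyz + 7, 1]   # body 1, timestep 1 -> array([28, 29, 30])
print(row_pair, step0_body0, step1_body1)
```

In other words, `q[xyz][x]` is two successive indexing operations on axis 0, while `q[xyz, x]` indexes both axes at once.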
From guziy.sasha at gmail.com Sun Jul 8 10:29:56 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Sun, 8 Jul 2012 10:29:56 -0400 Subject: [SciPy-User] noob question about slicing / extracting numbers from array... In-Reply-To: <4FF90B80.8080105@gmail.com> References: <4FF90B80.8080105@gmail.com> Message-ID: Before the loop over bodies reinitialize xyz: for x in range(0,timesteps): print print "Step: ", x print "------------------" xyz=scipy.array([0,1,2]) #<------ reinit for b in range(0,bodies): # LCS[b] = q[xyz][x] print "Body: ", b, " --- q",xyz,"[",x,"] =" test = q[xyz][x] print " q[xyz][x] = ", test xyz = xyz + 7 # go to next body 2012/7/8 newsboost > Test program: > ------------------------------------ > #!/usr/bin/python > > #import numpy as np > import scipy > import scipy.io > import sys > import numpy > > # load data > timesteps = 2 > bodies = 3 > q = scipy.arange(0, bodies*7*timesteps).reshape(timesteps,bodies*7).T > print "type(q) = ", type(q) > print "q.shape =", q.shape > print "q = " > print q > print > > LCS=scipy.zeros((bodies,3)) > xyz=scipy.array([0,1,2]) > > for x in range(0,timesteps): > print > print "Step: ", x > print "------------------" > > for b in range(0,bodies): > # LCS[b] = q[xyz][x] > print "Body: ", b, " --- q",xyz,"[",x,"] =" > test = q[xyz][x] > print " q[xyz][x] = ", test > xyz = xyz + 7 # go to next body > ------------------------------------ > > Output: > =============== > $ ./test.py > type(q) = > q.shape = (21, 2) > q = > [[ 0 21] > [ 1 22] > [ 2 23] > [ 3 24] > [ 4 25] > [ 5 26] > [ 6 27] > [ 7 28] > [ 8 29] > [ 9 30] > [10 31] > [11 32] > [12 33] > [13 34] > [14 35] > [15 36] > [16 37] > [17 38] > [18 39] > [19 40] > [20 41]] > > > Step: 0 > ------------------ > Body: 0 --- q [0 1 2] [ 0 ] = > q[xyz][x] = [ 0 21] > Body: 1 --- q [7 8 9] [ 0 ] = > q[xyz][x] = [ 7 28] > Body: 2 --- q [14 15 16] [ 0 ] = > q[xyz][x] = [14 35] > > Step: 1 > ------------------ > Body: 0 --- q [21 22 23] [ 1 ] = > Traceback (most recent 
call last): > File "./test.py", line 30, in > test = q[xyz][x] > IndexError: index (21) out of range (0<=index<20) in dimension 0 > =============== > > What I want: > ************ > Step: 0 (using data from the 1.column) > ------------------ > Body: 0 --- q [0 1 2] [ 0 ] = > q[xyz][x] = [ 0 1 2] > Body: 1 --- q [7 8 9] [ 0 ] = > q[xyz][x] = [ 7 8 9] > Body: 2 --- q [14 15 16] [ 0 ] = > q[xyz][x] = [14 15 16] > > Step: 1 (etc. as above, but now using the 2. column) > ------------------ > Body: 0 --- q [0 1 2] [ 1 ] = > q[xyz][x] = [ 21 22 23] > Body: 1 --- q [7 8 9] [ 1 ] = > q[xyz][x] = [ 28 29 30] > etc. > ************ > > Hope you understand... It's a noob question, I know... I just cannot > make it right... > > Thanks... > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From newsboost at gmail.com Sun Jul 8 12:15:01 2012 From: newsboost at gmail.com (newsboost) Date: Sun, 08 Jul 2012 18:15:01 +0200 Subject: [SciPy-User] noob question about slicing / extracting numbers from array... In-Reply-To: <4FF9B021.70304@gmail.com> References: <4FF9B021.70304@gmail.com> Message-ID: <4FF9B205.1040703@gmail.com> Uh, sorry - this has exactly the same behaviour on my system as before... Why should it help to re-init xyz in your opinion in the x-loop? My problem is that I want to process all columns (only two here) one at a time and then I want to extract 3 numbers for each body. Now I only get 2 numbers... Are you familiar with the Matlab notation: q(xyz, x) ??? That is exactly what I want... 3 numbers from each x-column... That's why I tried: q[xyz][x] It seems like I'm getting two numbers from the first value (of the 3-vector xyz, so the last two numbers in xyz is not used) which corresponds to the row... But what I want is 3 numbers (those from xyz) at the current x-column... 
Sorry, I don't think your suggestion worked. Looking forward to hearing
other suggestions. Thanks.

On 07/08/2012 06:06 PM, newsboost wrote:
> [garbled quote of the earlier reply and of the original test program
> and output elided]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guziy.sasha at gmail.com  Sun Jul  8 12:27:39 2012
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Sun, 8 Jul 2012 12:27:39 -0400
Subject: [SciPy-User] noob question about slicing / extracting numbers from array...
In-Reply-To: <4FF9B205.1040703@gmail.com>
References: <4FF9B021.70304@gmail.com> <4FF9B205.1040703@gmail.com>
Message-ID:

Hi newsboost,

sorry, I thought you were asking why you are getting the out-of-bounds
exception. You have to reinitialize xyz at the beginning of each time
step if you want to iterate through the same bodies on each time step.
In order to get what you need, write test = q[xyz, x].
please, see the attached script, which works as follows on my system: >>> type(q) = q.shape = (21, 2) q = [[ 0 21] [ 1 22] [ 2 23] [ 3 24] [ 4 25] [ 5 26] [ 6 27] [ 7 28] [ 8 29] [ 9 30] [10 31] [11 32] [12 33] [13 34] [14 35] [15 36] [16 37] [17 38] [18 39] [19 40] [20 41]] Step: 0 ------------------ Body: 0 --- q [0 1 2] [ 0 ] = q[xyz][x] = [0 1 2] Body: 1 --- q [7 8 9] [ 0 ] = q[xyz][x] = [7 8 9] Body: 2 --- q [14 15 16] [ 0 ] = q[xyz][x] = [14 15 16] Step: 1 ------------------ Body: 0 --- q [0 1 2] [ 1 ] = q[xyz][x] = [21 22 23] Body: 1 --- q [7 8 9] [ 1 ] = q[xyz][x] = [28 29 30] Body: 2 --- q [14 15 16] [ 1 ] = q[xyz][x] = [35 36 37] >>> Cheers -- Oleksandr (Sasha) Huziy 2012/7/8 newsboost > Uh, sorry - this has exactly the same behaviour on my system as before... > Why should it help to re-init xyz in your opinion in the x-loop? > > My problem is that I want to process all columns (only two here) one at a > time and then I want to extract 3 numbers for each body. Now I only get 2 > numbers... Are you familiar with the Matlab notation: > > q(xyz, x) ??? > > That is exactly what I want... 3 numbers from each x-column... That's why > I tried: q[xyz][x] > It seems like I'm getting two numbers from the first value (of the > 3-vector xyz, so the last two numbers in xyz is not used) which corresponds > to the row... But what I want is 3 numbers (those from xyz) at the current > x-column... > > Sorry, I don't think your suggestion worked. Look forward to hear other > suggestions. Thanks. > > > On 07/08/2012 06:06 PM, newsboost wrote: > > *Oleksandr Huziy* guziy.sasha at gmail.... 
> [garbled nested quote of the earlier messages (test program and
> output) elided]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.py
Type: application/octet-stream
Size: 643 bytes
Desc: not available
URL:

From michf at post.tau.ac.il  Sun Jul  8 12:36:14 2012
From: michf at post.tau.ac.il (michf at post.tau.ac.il)
Date: Sun, 08 Jul 2012 19:36:14 +0300
Subject: [SciPy-User] missing lapack functions in scipy.linalg.flapack
Message-ID: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il>

I'm trying to access a few lapack functions that don't seem to be
available directly in scipy as far as I can tell. Specifically, at the
moment I need ?trsyl (solve the triangular Sylvester equation) and the
various subfunctions for computing eigenvectors (the whole stack used
behind ?geev).

I'm currently using the 64-bit windows distribution from enthought
(academic version).

Any ideas?

Thanks

From newsboost at gmail.com  Sun Jul  8 13:33:18 2012
From: newsboost at gmail.com (newsboost)
Date: Sun, 08 Jul 2012 19:33:18 +0200
Subject: [SciPy-User] noob question about slicing / extracting numbers from array...
In-Reply-To: References: <4FF9B021.70304@gmail.com> <4FF9B205.1040703@gmail.com> Message-ID: <4FF9C45E.8000103@gmail.com>

Hi Oleksandr,

Ah, thank you very much - I wasn't even focused on the exception - thanks for correcting that too. It's quite weird to me that xyz has to be re-initialized in the loop, as if one could only write to it a single time (and not overwrite it)? That will probably take some time for me to feel comfortable with. So my mistake was simply to use q[xyz][x] instead of q[xyz, x] - that's very nice - thank you very much for that. This I should be able to get familiar with pretty quickly, I hope... Now I can continue with my program - thanks for the quick answers!

On 07/08/2012 06:27 PM, Oleksandr Huziy wrote:
> Hi newsboost,
>
> sorry, I thought you were asking why you are getting the exception:
> out of bounds.
> You have to reinitialize xyz at the beginning of each time step if you
> want to iterate through the same bodies on each time step.
> In order to get what you need you write as follows: test = q[xyz, x].
>
> [quoted script, output and earlier thread snipped - identical to the
> messages above]
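For the record, the difference between the two spellings can be shown in a few lines: q[xyz][x] first fancy-indexes three whole rows (an intermediate array of shape (3, 2)) and then takes row x of that result, while q[xyz, x] pairs the row indices with column x. A minimal sketch using the same array shapes as the thread (written with Python 3 prints, unlike the Python 2 code above):

```python
import numpy as np

timesteps, bodies = 2, 3
q = np.arange(bodies * 7 * timesteps).reshape(timesteps, bodies * 7).T  # (21, 2)
xyz = np.array([0, 1, 2])

# Chained indexing: q[xyz] -> rows 0, 1, 2 (shape (3, 2)); the trailing
# [0] then takes the first ROW of that intermediate result.
print(q[xyz][0])   # -> [ 0 21]  (one value per timestep -- not what was wanted)

# Tuple indexing: rows 0, 1, 2 of column 0 -- body 0's x/y/z at step 0.
print(q[xyz, 0])   # -> [0 1 2]
print(q[xyz, 1])   # -> [21 22 23]  (same body at step 1)
```

This is why the Matlab-style q(xyz, x) translates to q[xyz, x] in numpy, not to q[xyz][x].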
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sun Jul 8 13:52:52 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 8 Jul 2012 19:52:52 +0200 Subject: [SciPy-User] missing lapack functions in scipy.linalg.flapack In-Reply-To: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il> References: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il> Message-ID: On Sun, Jul 8, 2012 at 6:36 PM, wrote:

> I'm trying to access a few lapack functions that don't seem to be
> available directly in scipy as far as I can tell.

They are. trsyl was added recently (for 0.11), geev should be in older versions too.

In [6]: linalg.flapack.dtrsyl
Out[6]:

In [7]: linalg.flapack.dgeev
Out[7]:

In [8]: linalg.solve_sylvester
Out[8]:

Ralf

> Specifically I need
> at the moment ?trsyl (solve triagonal sylvester equation) and the
> various subfunctions for computing eigenvectors (the whole stack used
> behind ?geev)
>
> I'm currently using the 64bit windows distribution from enthought
> (academic version).
>
> Any ideas?
>
> Thanks
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part -------------- An HTML attachment was scrubbed... URL: From michf at post.tau.ac.il Sun Jul 8 15:43:09 2012 From: michf at post.tau.ac.il (michf at post.tau.ac.il) Date: Sun, 08 Jul 2012 22:43:09 +0300 Subject: [SciPy-User] missing lapack functions in scipy.linalg.flapack In-Reply-To: References: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il> Message-ID: <20120708224309.Horde.vuXLFJtDWw9P_eLN5dQgP0A@webmail.tau.ac.il> It seems that Enthought is shipping scipy 0.10.1 and not 0.11, and a version of 0.11 for 64-bit Windows doesn't seem to be available. Under 0.10 solve_sylvester seems to be missing, and trsyl as well.
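As a stopgap for scipy builds that predate solve_sylvester, the Sylvester equation AX + XB = Q can be solved with plain numpy via the vec/Kronecker identity vec(AX + XB) = (I_n ⊗ A + B^T ⊗ I_m) vec(X). This is only a workaround sketch, not the LAPACK ?trsyl path that scipy 0.11 uses — it costs O((mn)^3), so it is practical only for small systems, and the function name below is illustrative:

```python
import numpy as np

def solve_sylvester_dense(a, b, q):
    """Solve a @ x + x @ b = q via the Kronecker/vec identity
    (I_n (x) A + B^T (x) I_m) vec(X) = vec(Q),
    where vec() stacks columns, i.e. Fortran order."""
    m, n = a.shape[0], b.shape[0]
    k = np.kron(np.eye(n), a) + np.kron(b.T, np.eye(m))
    vec_x = np.linalg.solve(k, q.reshape(-1, order="F"))
    return vec_x.reshape(m, n, order="F")

# Round-trip check on a small random system (unique solution whenever
# A and -B share no eigenvalues, which holds almost surely here).
rng = np.random.RandomState(0)
a = rng.randn(4, 4)
b = rng.randn(3, 3)
x_true = rng.randn(4, 3)
q = a @ x_true + x_true @ b
x = solve_sylvester_dense(a, b, q)
print(np.allclose(x, x_true))  # True
```

For the sizes trsyl is normally used at, the dense Kronecker system is far too large; this is only meant to bridge the gap until a scipy with solve_sylvester is available.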
As for the eigenvalues, I was actually looking for trevc and hsein. It seems that wrappers for these are not implemented; I'm looking into how to do it myself, although I'm not sure how. There is some information regarding calling MKL directly using ctypes, so I'll see if I can get that to work. Thanks. Quoting Ralf Gommers :

> On Sun, Jul 8, 2012 at 6:36 PM, wrote:
>
>> I'm trying to access a few lapack functions that don't seem to be
>> available directly in scipy as far as I can tell.
>
> They are. trsyl was added recently (for 0.11), geev should be in older
> versions too.
>
> In [6]: linalg.flapack.dtrsyl
> Out[6]:
>
> In [7]: linalg.flapack.dgeev
> Out[7]:
>
> In [8]: linalg.solve_sylvester
> Out[8]:
>
> Ralf
>
>> Specifically I need
>> at the moment ?trsyl (solve triagonal sylvester equation) and the
>> various subfunctions for computing eigenvectors (the whole stack used
>> behind ?geev)
>>
>> I'm currently using the 64bit windows distribution from enthought
>> (academic version).
>>
>> Any ideas?
>>
>> Thanks
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>

From emmanuelle.gouillart at normalesup.org Tue Jul 10 04:09:56 2012 From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart) Date: Tue, 10 Jul 2012 10:09:56 +0200 Subject: [SciPy-User] Euroscipy 2012: early bird registration ending soon Message-ID: <20120710080956.GA23990@phare.normalesup.org> Hello, early bird registration for Euroscipy 2012 is soon coming to an end, with the deadline on July 22nd. Don't forget to register soon! Reduced fees are available for academics, students and speakers. Registration takes place online at http://www.euroscipy.org/conference/euroscipy2012. Euroscipy 2012 is the annual European conference for scientists using the Python language. It will be held August 23-27 2012 in Brussels, Belgium.
Our program has been online for a few weeks now: we're very excited to have a great selection of tutorials, as well as talks and poster presentations. During the two days of tutorials (August 23 and 24), it will be possible to attend either the introduction track or the advanced track, or to combine both tracks (see http://www.euroscipy.org/track/6538?tab=tracktalkslist and http://www.euroscipy.org/track/6539?tab=tracktalkslist). As for the highlights of the two conference days (August 25 and 26), we are very happy to have David Beazley (http://www.dabeaz.com) and Eric Jones (http://www.enthought.com/company/support-team.php) as our keynote speakers. The list of talks is available at http://www.euroscipy.org/track/6540?tab=tracktalkslist, with subjects ranging from extension programming to machine learning or cellular biology. We're looking forward to exciting discussions during the talk sessions and around the posters, as happened during the previous editions of Euroscipy! Sprints may be organized at the conference venue during the days following the conference, from Monday 27th on. Since there is a limited number of rooms booked for the sprints, please contact the organizers by July 22 if you intend to organize one. Two sprints are already planned by the scikit-learn and scikits-image teams. The EuroSciPy 2012 conference will feature a best talk, a best poster and a jury award. All conference participants will be given the opportunity to cast a vote for the best talk and best poster awards, while the jury award is selected by the members of the program committee. Each prize consists of a Commercial Use license for Wing IDE Professional, an integrated development environment designed specifically for Python programmers. The licenses are generously donated by Wingware. Financial support may be granted by Numfocus to a small number of eligible students. See http://www.euroscipy.org/card/euroscipy2012_support_numfocus for more details on how to apply.
For information that cannot be found on the conference website, please contact the organizing team at org-team at lists.euroscipy.org Cheers, Emmanuelle, for the organizing team From ralf.gommers at googlemail.com Tue Jul 10 08:00:34 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 10 Jul 2012 14:00:34 +0200 Subject: [SciPy-User] missing lapack functions in scipy.linalg.flapack In-Reply-To: <20120708224309.Horde.vuXLFJtDWw9P_eLN5dQgP0A@webmail.tau.ac.il> References: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il> <20120708224309.Horde.vuXLFJtDWw9P_eLN5dQgP0A@webmail.tau.ac.il> Message-ID: On Sun, Jul 8, 2012 at 9:43 PM, wrote:

> It seems that ensight is using scipy 0.10.1 and not 0.11, and a
> version of 0.11 for 64 bit windows doesn't seem to be available.
>
> Under 0.10 solve_sylvester seems to be missing, and trsyl as well.
>
> As for eigenvalues, I was actually looking for trevc and hsein.
>
> It seems that wrappers for these are not implemented, I'm looking on
> how to do it myself, although I'm not sure. There is some information
> regarding calling mkl directly using ctypes, I'll see if I can get
> that to work.
>

If you want to wrap them in a way that's reusable in scipy, then using f2py would be the way to go. For examples, see scipy/linalg/flapack.pyf.src

Ralf

> Thanks.
>
> Quoting Ralf Gommers :
>
> > On Sun, Jul 8, 2012 at 6:36 PM, wrote:
> >
> >> I'm trying to access a few lapack functions that don't seem to be
> >> available directly in scipy as far as I can tell.
> >
> > They are. trsyl was added recently (for 0.11), geev should be in older
> > versions too.
> > > > In [6]: linalg.flapack.dtrsyl > > Out[6]: > > > > In [7]: linalg.flapack.dgeev > > Out[7]: > > > > In [8]: linalg.solve_sylvester > > Out[8]: > > > > Ralf > > > > > > > >> Specifically I need > >> at the moment ?trsyl (solve triagonal sylvester equation) and the > >> various subfunctions for computing eigenvectors (the whole stack used > >> behind ?geev) > >> > >> I'm currently using the 64bit windows distribution from enthought > >> (academic version). > >> > >> Any ideas? > >> > >> Thanks > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Jul 10 14:48:07 2012 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 Jul 2012 19:48:07 +0100 Subject: [SciPy-User] [ANN] Patsy 0.1.0, a python library for statistical formulas Message-ID: [Apologies for cross-posting. Please direct any replies to pydata at googlegroups.com.] I'm pleased to announce the first release of Patsy, a Python package for describing statistical models and building design matrices using "formulas". Patsy's formulas are inspired by and largely compatible with those used in R. Patsy makes it easy to quickly try different models and work with categorical data and interactions. (Note: Patsy was originally known as "Charlton" during development. Long story.) Patsy can be used directly by users to generate model matrices for passing to other libraries, or used by those libraries to offer a high-level formula interface. Patsy's goal is to become the standard format used by different Python statistical packages for specifying models, just as formulas are the standard interface used by R packages. 
While this is an initial release, we already have robust test coverage (>98% statement coverage), comprehensive documentation, and a number of advanced features. (We even correctly handle a few corner cases that R itself gets wrong.) For more information, see: Overview: http://patsy.readthedocs.org/en/latest/overview.html Quickstart: http://patsy.readthedocs.org/en/latest/quickstart.html How to integrate it into your library: http://patsy.readthedocs.org/en/latest/library-developers.html Downloads: http://pypi.python.org/pypi/patsy/ Source: https://github.com/pydata/patsy Share and enjoy, -n From marston at chalmers.se Tue Jul 10 20:04:13 2012 From: marston at chalmers.se (Marston Sheldon Johnston) Date: Wed, 11 Jul 2012 00:04:13 +0000 Subject: [SciPy-User] 2D bilinear interpolation from low to high resolution Message-ID: <40CB0580-E434-4304-A2C3-A49BE23CB529@chalmers.se> Hi, I've been trying to copy and modify a Scipy Cookbook function that does a 2d spline interpolation using the same data limits but with higher resolution. In my function I use bilinear interpolation. I've struggled to get this working, and now that I have, the results are horrible and incorrect. I would love some help with this if anyone has experience with this particular subject. I assume the lat and lon must be ascending; since they are not in the original data, I have reordered the data. I've tried this without sorting the lats and the results are similar: garbled! The data is a slice of an array. The slice is centered at lat0, lon0 and extends ±10 degrees on all 4 sides, forming a 20x20 degree window. The initial resolution is 1.125 and the new resolution is 0.703125. The input dictionary and data are posted below.
def Low2HighRes(plist,ivals):
    """N-D interpolation for equally-spaced data"""
    latsort = np.argsort(plist['xlats'])
    lonsort = np.argsort(plist['xlons'])
    ix,jy = np.meshgrid(plist['newx'],plist['newy'])
    coords = scipy.array([ix,jy])
    vals = ivals[latsort,:]
    nvals = scipy.ndimage.map_coordinates(vals,coords,order=1,mode='nearest')
    return nvals

The new resolution is 0.703125:

'newx': array([-9.84375 , -9.140625, -8.4375 , -7.734375, -7.03125 , -6.328125, -5.625 , -4.921875, -4.21875 , -3.515625, -2.8125 , -2.109375, -1.40625 , -0.703125, -0. , 0.703125, 1.40625 , 2.109375, 2.8125 , 3.515625, 4.21875 , 4.921875, 5.625 , 6.328125, 7.03125 , 7.734375, 8.4375 , 9.140625, 9.84375 ])

'newy': array([-9.84375 , -9.140625, -8.4375 , -7.734375, -7.03125 , -6.328125, -5.625 , -4.921875, -4.21875 , -3.515625, -2.8125 , -2.109375, -1.40625 , -0.703125, -0. , 0.703125, 1.40625 , 2.109375, 2.8125 , 3.515625, 4.21875 , 4.921875, 5.625 , 6.328125, 7.03125 , 7.734375, 8.4375 , 9.140625, 9.84375 ])

original coordinates of the data to be interpolated are lats and lons with a 1.125 resolution:

'modx': array([-10.125, -9. , -7.875, -6.75 , -5.625, -4.5 , -3.375, -2.25 , -1.125, -0. , 1.125, 2.25 , 3.375, 4.5 , 5.625, 6.75 , 7.875, 9. , 10.125])
'mody': array([-10.125, -9. , -7.875, -6.75 , -5.625, -4.5 , -3.375, -2.25 , -1.125, -0. , 1.125, 2.25 , 3.375, 4.5 , 5.625, 6.75 , 7.875, 9. , 10.125])

'lats': array([ 18.50457985, 17.38309053, 16.26160115, 15.14011169, 14.01862218, 12.8971326 , 11.77564298, 10.65415331, 9.53266359, 8.41117384, 7.28968406, 6.16819425, 5.04670442, 3.92521457, 2.8037247 , 1.68223483, 0.56074494, -0.56074494, -1.68223483]),
'lons': array([ 229.5 , 230.625, 231.75 , 232.875, 234. , 235.125, 236.25 , 237.375, 238.5 , 239.625, 240.75 , 241.875, 243.
, 244.125, 245.25 , 246.375, 247.5 , 248.625, 249.75 ]), The data: array([[ 2.91407923e-03, 3.39587848e-03, 1.21381157e-03, 3.29374452e-04, 7.41946205e-05, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 4.87387297e-05, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 3.60107806e-05, 1.17035036e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 2.40930752e-03, 4.95893275e-03, 2.63592694e-03, 1.06014498e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.30197592e-03, 1.14706755e-02, 1.85952216e-04, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 1.04772742e-03, 2.78711016e-03, 3.14659718e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 6.02249238e-05, 4.03569084e-06, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.24175097e-06, 0.00000000e+00, 0.00000000e+00, 2.44314520e-04, 2.39037072e-05, 3.38377140e-05, 3.95187264e-04, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.48350193e-06, 7.45050602e-06, 1.17966347e-05, 0.00000000e+00, 1.55218879e-06, 1.86262650e-06, 2.60457280e-04, 0.00000000e+00, 4.62552271e-05, 9.49318637e-04, 9.99299111e-04, 4.70123850e-02, 8.09109434e-02], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.17966347e-05, 7.91616258e-05, 1.24175097e-06, 0.00000000e+00, 6.20875483e-07, 4.59447874e-05, 5.22777205e-04, 0.00000000e+00, 4.52431990e-03, 1.75440796e-02, 8.04008991e-02, 1.31909043e-01, 2.24047899e-01, 1.60426781e-01], [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 9.31313252e-07, 1.14861969e-05, 
2.39968387e-04, 4.34612866e-06, 2.17306433e-06, 1.44353558e-04, 1.58664735e-03, 2.86217406e-02, 1.11849174e-01, 9.77677181e-02, 1.43859955e-02, 7.32049495e-02, 6.55011237e-02, 3.44568849e-01, 4.58411306e-01, 2.49592885e-01], [ 0.00000000e+00, 1.61427633e-05, 4.99183894e-04, 3.17937918e-02, 4.85810265e-02, 1.03917181e-01, 1.25399157e-01, 1.80325225e-01, 2.97502756e-01, 5.07521629e-01, 7.01690793e-01, 7.83944130e-01, 3.59588116e-01, 4.82781321e-01, 2.19913769e+00, 1.45968425e+00, 7.70302236e-01, 6.41377449e-01, 1.36474028e-01], [ 1.13744393e-03, 6.07185205e-03, 2.85994828e-01, 1.50236040e-01, 7.42231190e-01, 1.22813118e+00, 6.46776557e-01, 5.81912160e-01, 1.87843180e+00, 1.92207563e+00, 1.52547717e+00, 1.32381713e+00, 3.73675466e-01, 7.55523860e-01, 2.65545297e+00, 1.27209759e+00, 1.13974738e+00, 1.16467550e-01, 2.09415719e-01], [ 1.38720661e-01, 3.32264960e-01, 4.35250461e-01, 7.92550027e-01, 1.22075701e+00, 8.08324039e-01, 3.90933961e-01, 3.94844234e-01, 4.14039224e-01, 1.87421179e+00, 1.89138579e+00, 1.72843981e+00, 5.95286727e-01, 1.14342570e+00, 1.61977232e+00, 4.52039897e-01, 2.81218112e-01, 1.66340306e-01, 9.93605703e-02], [ 9.74169195e-01, 9.17763233e-01, 5.85691392e-01, 4.46551383e-01, 2.51251876e-01, 1.49007021e-02, 2.15577297e-02, 1.01625830e-01, 9.69323218e-02, 2.31756687e-01, 8.57786715e-01, 1.20130097e-02, 4.75826561e-02, 1.08127639e-01, 9.12918925e-01, 1.77337572e-01, 2.07490385e-01, 8.76067802e-02, 7.50075638e-01], [ 7.03467131e-01, 5.82458824e-02, 9.40253865e-03, 5.74371964e-03, 2.25005276e-03, 5.12843195e-04, 3.22482735e-03, 6.15287630e-04, 0.00000000e+00, 0.00000000e+00, 3.08016338e-03, 7.09070871e-03, 1.34970576e-01, 2.69089937e-01, 5.28816462e-01, 7.75388122e-01, 9.10060704e-02, 3.86802018e-01, 5.62999070e-01], [ 1.45083088e-02, 1.42366753e-03, 2.19852012e-03, 8.08690325e-04, 2.21124804e-03, 1.78905274e-03, 1.85610738e-03, 2.66821240e-03, 1.23740488e-03, 7.45671510e-04, 1.78346480e-03, 6.83180336e-03, 2.30602473e-02, 6.35242537e-02, 
5.68188019e-02, 7.31670763e-03, 6.97802007e-03, 1.26199163e-02, 2.35131755e-02], [ 6.02249238e-05, 2.92432378e-04, 1.13030383e-03, 5.85175178e-04, 1.46557658e-03, 1.88808236e-03, 3.10065225e-03, 3.55264964e-03, 4.15862398e-03, 7.17980415e-03, 4.59075347e-03, 2.15481054e-02, 4.47589159e-03, 6.03925623e-03, 9.55558382e-03, 1.08743245e-02, 4.59385756e-03, 6.76350715e-03, 1.05188731e-02], [ 1.71982509e-04, 7.66781232e-05, 2.50212825e-03, 4.58050938e-03, 3.75257153e-03, 3.77368135e-03, 3.04756756e-03, 4.28559305e-03, 1.21471193e-02, 1.85669716e-02, 2.26231515e-02, 2.37028543e-02, 7.72741623e-03, 1.21803358e-02, 7.17204344e-03, 1.18307828e-03, 9.51678026e-03, 8.88597034e-03, 1.38688069e-02], [ 7.85935298e-03, 1.02338903e-02, 2.00772509e-02, 1.29666738e-02, 8.32718238e-03, 7.63056008e-03, 6.14076946e-03, 9.90855228e-03, 1.00889169e-02, 1.48749352e-02, 1.88460555e-02, 8.26913025e-03, 9.51088127e-03, 7.60696689e-03, 1.25758338e-03, 4.51376487e-04, 1.18463056e-03, 2.09514447e-03, 2.51206243e-03], [ 2.33238097e-02, 2.73073465e-02, 3.22693810e-02, 1.88162532e-02, 5.83653990e-03, 3.47224623e-03, 1.80892076e-03, 9.19796061e-03, 8.73230398e-03, 1.45570468e-02, 1.55864581e-02, 1.42422626e-02, 1.35748219e-02, 1.19139794e-02, 3.64764361e-03, 5.92625700e-04, 2.55024619e-03, 1.44043122e-03, 1.51959271e-03], [ 4.65045050e-02, 4.04065773e-02, 2.90905014e-02, 1.47327557e-02, 6.68620877e-03, 5.12005016e-03, 4.09746822e-03, 5.41186146e-03, 1.08640799e-02, 1.43946875e-02, 1.87681355e-02, 2.52960213e-02, 1.79439243e-02, 1.72994547e-02, 7.33191893e-03, 4.66867350e-03, 1.00023043e-03, 9.08961752e-04, 2.14481447e-03], [ 7.23621100e-02, 4.10851948e-02, 8.70250165e-03, 5.37367724e-03, 2.11873767e-03, 3.15994583e-03, 2.98547978e-03, 5.60216000e-03, 5.73099125e-03, 9.40564275e-03, 1.04456097e-02, 1.35018695e-02, 2.56229118e-02, 2.75842566e-02, 2.80958600e-02, 3.02949995e-02, 4.40449081e-03, 1.03189505e-03, 1.60216924e-03]], dtype=float32) Appreciate any help with getting this function to work. 
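One thing worth noting about the Low2HighRes function above: scipy.ndimage.map_coordinates interprets its coords array as fractional row/column *indices* into vals, not as physical lat/lon values, so passing the newx/newy degree values directly is a likely cause of garbled output. A numpy-only sketch of the conversion from physical coordinates to fractional indices, followed by manual bilinear interpolation, using the grid spacings quoted above (the field values and helper name are illustrative, not from the post):

```python
import numpy as np

def regrid_bilinear(vals, x0, dx, y0, dy, xq, yq):
    """Bilinear interpolation on a regular grid.

    Converts physical coordinates (xq, yq) into fractional array indices
    first -- the step that map_coordinates expects the caller to do."""
    ix = (np.asarray(xq) - x0) / dx                       # fractional column index
    iy = (np.asarray(yq) - y0) / dy                       # fractional row index
    j0 = np.clip(np.floor(ix).astype(int), 0, vals.shape[1] - 2)
    i0 = np.clip(np.floor(iy).astype(int), 0, vals.shape[0] - 2)
    tx = ix - j0                                          # weights within the cell
    ty = iy - i0
    return ((1 - ty) * (1 - tx) * vals[i0, j0]
            + (1 - ty) * tx * vals[i0, j0 + 1]
            + ty * (1 - tx) * vals[i0 + 1, j0]
            + ty * tx * vals[i0 + 1, j0 + 1])

# Grids with the spacings from the post (the field itself is made up):
oldx = -10.125 + 1.125 * np.arange(19)        # 1.125-degree source grid
newx = -9.84375 + 0.703125 * np.arange(29)    # 0.703125-degree target grid
field = np.sin(oldx)[:, None] * np.cos(oldx)[None, :]
XQ, YQ = np.meshgrid(newx, newx)
fine = regrid_bilinear(field, oldx[0], 1.125, oldx[0], 1.125, XQ, YQ)
print(fine.shape)  # (29, 29)
```

The same (ix, iy) index arrays, stacked as [iy, ix], are what map_coordinates would need instead of the raw degree values; note also that the row index must come from the y/lat axis and the column index from the x/lon axis.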
/M From lists at hilboll.de Wed Jul 11 04:25:55 2012 From: lists at hilboll.de (Andreas Hilboll) Date: Wed, 11 Jul 2012 10:25:55 +0200 Subject: [SciPy-User] 2D bilinear interpolation from low to high resolution In-Reply-To: <40CB0580-E434-4304-A2C3-A49BE23CB529@chalmers.se> References: <40CB0580-E434-4304-A2C3-A49BE23CB529@chalmers.se> Message-ID:

> Hi,
>
> I've been trying to copy and modify a Scipy Cookbook function that does a
> 2d spline interpolation using the same data limits but with higher
> resolution. In my function I use a bilinear interpolation.
> I've struggled to get this working and now that I have the results are
> horrible and incorrect. Would love some help with this if anyone has some
> experience with this particular subject.
>
> [full quote of the function, coordinates and data snipped - identical to
> the original message above]
>
> Appreciate any help with getting this function to work.
> /M

For lat/lon data, I suggest taking a look at the new classes for spherical coordinates which got introduced in SciPy 0.11: scipy.interpolate.LSQSphereBivariateSpline, SmoothSphereBivariateSpline, and RectSphereBivariateSpline.

A.

From n.nikandish at gmail.com Tue Jul 10 22:43:46 2012 From: n.nikandish at gmail.com (Naser Nikandish) Date: Tue, 10 Jul 2012 19:43:46 -0700 (PDT) Subject: [SciPy-User] installing numpy and scipy for preinstalled Python 2.6 on Mac Message-ID:

Hello,

I am new to the whole numpy and scipy world. I was going to install both, but I have read mixed things about installation problems, compatibility, etc. Here is the system I am using: Mac OS Lion 10.7. I am using the Python 2.6 that comes preinstalled on the Mac. The reason I am not using Python 2.7 is that I need to use Gurobi, which at the moment can only be used with Python 2.6 on Mac. I was wondering if there is any clear procedure for installing NumPy and SciPy on this system. I really appreciate your help. Btw, I know numpy comes preinstalled on my Mac, but it would be nice to install the latest version, if possible.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cournape at gmail.com Wed Jul 11 09:17:28 2012 From: cournape at gmail.com (David Cournapeau) Date: Wed, 11 Jul 2012 14:17:28 +0100 Subject: [SciPy-User] installing numpy and scipy for preinstalled Python 2.6 on Mac In-Reply-To: References: Message-ID:

Hi Naser,

On Wed, Jul 11, 2012 at 3:43 AM, Naser Nikandish wrote:
> Hello,
>
> I am new to the whole numpy and scipy world. I was going to install both but
> I read mixed things about installation problems compatibility etc.
Here is
> the system I am using:
>
> Mac OS Lion 10.7
> I am using the Python 2.6 that comes preinstalled on Mac. The reason I am not
> using Python 2.7 is that I need to use Gurobi, which at the moment can only be
> used with Python 2.6 on Mac.

Can you use python 2.6 installed from www.python.org ? If so, I would strongly suggest installing just that, and then use the official binaries from numpy and scipy for python 2.6. Otherwise, you will have to build it by yourself, which is not trivial.

David

From loic at inzenet.org Wed Jul 11 09:05:44 2012 From: loic at inzenet.org (Loïc) Date: Wed, 11 Jul 2012 13:05:44 +0000 (UTC) Subject: [SciPy-User] odeint MemoryError Message-ID: Hi,

I'm trying to run scipy.integrate.odeint on a rather large vector, but I obtain a MemoryError. I do not really understand why I'm getting this error, as my vector can fit into memory... Which scipy internal operation could trigger the error? Is there maybe some internal parameter to tune?

Here is a small test program which obtains a MemoryError:

--
from scipy import *
from scipy.integrate import *

n = 45000

def rdot(r, t):
    r *= 2
    return r

r0 = rand(n)
odeint(rdot, r0, linspace(0, 10, 2))
--

Thanks,
Loïc

From dtlussier at gmail.com Wed Jul 11 11:55:18 2012 From: dtlussier at gmail.com (Dan Lussier) Date: Wed, 11 Jul 2012 10:55:18 -0500 Subject: [SciPy-User] installing numpy and scipy for preinstalled Python 2.6 on Mac In-Reply-To: References: Message-ID: A great way to get Numpy + Scipy as well as a bunch of other very useful packages is via the Enthought Python Distribution: http://www.enthought.com/products/epd.php

I have been using it on Mac OS X for years and the installation is via the usual GUI-based OS X installer. In addition to the ease of installation, another advantage of this approach is that it installs a completely new Python and leaves the system Python installation alone.
On 2012-07-10, at 9:43 PM, Naser Nikandish wrote:

> Hello,
>
> I am new to the whole numpy and scipy world. I was going to install both but I read mixed things about installation problems, compatibility etc. Here is the system I am using:
>
> Mac OS Lion 10.7
> I am using Python 2.6 that comes preinstalled on Mac. The reason I am not using Python 2.7 is that I need to use Gurobi, which at the moment can only be used with Python 2.6 on Mac.
>
> I was wondering if there is any clear procedure for installing NumPy and SciPy on this system. I really appreciate your help. Btw, I know numpy comes preinstalled on my mac, but it would be nice to install the latest version, if possible.
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Thu Jul 12 06:57:40 2012 From: sturla at molden.no (Sturla Molden) Date: Thu, 12 Jul 2012 12:57:40 +0200 Subject: [SciPy-User] missing lapack functions in scipy.linalg.flapack In-Reply-To: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il> References: <20120708193614.Horde.x6UCI5tDWw9P_bb_hVh2mmA@webmail.tau.ac.il> Message-ID: <4FFEADA4.7070801@molden.no> On 08.07.2012 18:36, michf at post.tau.ac.il wrote:
> I'm trying to access a few lapack functions that don't seem to be
> available directly in scipy as far as I can tell. Specifically I need
> at the moment ?trsyl (solve the triangular Sylvester equation) and the
> various subfunctions for computing eigenvectors (the whole stack used
> behind ?geev)
>
> I'm currently using the 64bit windows distribution from enthought
> (academic version).
>
> Any ideas?

The easiest way to get a LAPACK function not in SciPy is to use LAPACKE (the new C interface) from Cython or ctypes.
Enthought comes with Intel MKL (at least on Windows), which already has a LAPACKE interface:

from ctypes import CDLL, POINTER, byref, c_char, c_int, c_double
import numpy as np
from numpy.ctypeslib import ndpointer

LAPACK_ROW_MAJOR = 101
LAPACK_COL_MAJOR = 102

intel_mkl = CDLL('mkl_rt.dll')
LAPACKE_dtrsyl = intel_mkl.LAPACKE_dtrsyl

array_t = ndpointer(dtype=np.float64, ndim=2)

LAPACKE_dtrsyl.restype = c_int
LAPACKE_dtrsyl.argtypes = [c_int, c_char, c_char, c_int, c_int, c_int,
                           array_t, c_int, array_t, c_int,
                           array_t, c_int, POINTER(c_double)]

And then you can call the function like this (note that SCALE is an output argument of ?trsyl, hence the pointer):

SCALE = c_double()
info = LAPACKE_dtrsyl(LAPACK_ROW_MAJOR,  # or LAPACK_COL_MAJOR
                      'N', 'N', ISGN, M, N,
                      A, LDA, B, LDB, C, LDC,
                      byref(SCALE))

Sturla

From ncis-cmsp2012 at shiep.edu.cn Thu Jul 12 18:18:58 2012 From: ncis-cmsp2012 at shiep.edu.cn (ncis-cmsp2012) Date: Fri, 13 Jul 2012 06:18:58 +0800 (CST) Subject: [SciPy-User] NCIS'12-CMSP'12-CFP-Final Extension: by 31 July 2012 Message-ID: <542130957.08337@student.shiep.edu.cn> An HTML attachment was scrubbed... URL: From jordens at gmail.com Fri Jul 13 02:08:52 2012 From: jordens at gmail.com (Robert Jordens) Date: Thu, 12 Jul 2012 23:08:52 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] adaptive simulated annealing (ASA) In-Reply-To: <4F391176.6080906@ed.ac.uk> References: <4F391176.6080906@ed.ac.uk> Message-ID: Hello,

On Monday, February 13, 2012 6:34:46 AM UTC-7, Dieter Werthmüller wrote:
>
> The message 'adaptive simulated annealing (ASA)' is over four years old
> now, asking for a python implementation of ASA (http://www.ingber.com/#ASA
> ).
>
> I was wondering if such an implementation is around today?

Now there is: http://pypi.python.org/pypi/pyasa

Let me know what you think.

Robert.

-------------- next part -------------- An HTML attachment was scrubbed...
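For readers who want a quick feel for the algorithm family before trying pyasa, here is a minimal pure-NumPy simulated-annealing sketch. This is not pyasa's API (see the PyPI page for that); the objective, cooling schedule, and step size below are purely illustrative.

```python
import numpy as np

def anneal(f, x0, n_iter=20000, t0=1.0, step=0.5, seed=0):
    """Metropolis-style simulated annealing with a simple 1/k cooling schedule."""
    rng = np.random.RandomState(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for k in range(n_iter):
        t = t0 / (1.0 + k)                        # temperature decays over time
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = f(cand)
        # always accept downhill moves; accept uphill moves with Boltzmann probability
        if fc < fx or rng.rand() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# a toy multimodal objective; its global minimum (value 0) is at the origin
f = lambda x: np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))
xmin, fmin = anneal(f, [2.5, -2.5])
```

Because the best point seen so far is recorded separately, the returned value is never worse than the starting point; dedicated implementations such as pyasa or ASA add adaptive schedules and re-annealing on top of this basic loop.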
URL: From tmp50 at ukr.net Sat Jul 14 07:09:39 2012 From: tmp50 at ukr.net (Dmitrey) Date: Sat, 14 Jul 2012 14:09:39 +0300 Subject: [SciPy-User] [SciPy-user] adaptive simulated annealing (ASA) In-Reply-To: References: <4F391176.6080906@ed.ac.uk> Message-ID: <72440.1342264179.3513226122095689728@ffe17.ukr.net>

--- Original message ---
From: "Robert Jordens"
To: scipy-user at googlegroups.com
Date: 13 July 2012, 18:21:43
Subject: Re: [SciPy-User] [SciPy-user] adaptive simulated annealing (ASA)

> Hello,
>
> On Monday, February 13, 2012 6:34:46 AM UTC-7, Dieter Werthmüller wrote: The message 'adaptive simulated annealing (ASA)' is over four years old
> now, asking for a python implementation of ASA (http://www.ingber.com/#ASA). I was wondering if such an implementation is around today? Now there is: http://pypi.python.org/pypi/pyasa
> Let me know what you think.

I have connected it to the OpenOpt GLP class ( http://openopt.org/GLP ); unfortunately, vectorization seems to be unavailable in pyasa, doesn't it? Is there any possibility of providing it? Currently de and pswarm thus work many times, and sometimes even orders of magnitude, faster.

Regards, D.

_______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmp50 at ukr.net Sat Jul 14 07:16:53 2012 From: tmp50 at ukr.net (Dmitrey) Date: Sat, 14 Jul 2012 14:16:53 +0300 Subject: [SciPy-User] New Python tool for searching maximum stable set of a graph Message-ID: <65118.1342264613.18046577829960613888@ffe5.ukr.net> Hi all,

In the OpenOpt software (BSD-licensed, http://openopt.org ) we have implemented a new class, STAB, for searching for the maximum stable set of a graph. networkx graphs are used as input arguments. Unlike networkx maximum_independent_set(), we focus on searching for the exact solution (this is an NP-hard problem).
interalg or the OpenOpt MILP solvers are used; some GUI features and stop criteria (e.g. maxTime, maxCPUTime, fEnough) can be used. Optional arguments are includedNodes and excludedNodes - nodes that have to be present/absent in the solution. See http://openopt.org/STAB for details. Future plans (although probably very long-term) include TSP and some other graph problems.

-------------------------

Regards, Dmitrey.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From tmp50 at ukr.net Mon Jul 16 13:35:09 2012 From: tmp50 at ukr.net (Dmitrey) Date: Mon, 16 Jul 2012 20:35:09 +0300 Subject: [SciPy-User] routine for linear least norms problems with specifiable accuracy Message-ID: <79633.1342460109.14890437708640157696@ffe16.ukr.net> hi all,

I have written a routine to solve dense / sparse problems

min {alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||^2 + beta1 * ||x||_1 + beta2 * ||x||^2}

with specifiable accuracy fTol > 0: abs(f-f*) <= fTol (this parameter is handled by the solvers gsubg and maybe amsg2p; the latter requires a known good enough fOpt estimation). Constraints (box-bound, linear, quadratic) could also easily be connected. This problem is encountered very often in many areas, e.g. machine learning and sparse approximation; see for example the elastic-net documentation at http://scikit-learn.org/stable/modules/

First of all, the large-scale solver gsubg is recommended. Some hand-tuning of its parameters could also essentially speed up the solver. You could also be interested in the other OpenOpt NSP solvers, ralg and amsg2p (although they are medium-scaled). You can see the source of the routine and its demo result here.

You shouldn't expect that gsubg will always solve your problem and report the obtained result with the specified accuracy - for some very difficult, e.g. extremely ill-conditioned, problems it may

* fail to solve the QP subproblem (the default QP solver is cvxopt; you may involve another one, e.g. the commercial or free-for-educational cplex)
* exit with another stop criterion, e.g. maxIter has been reached, or maxShoots has been exceeded (usually the latter means you have reached the solution, but it cannot be guaranteed in that case)

First of all I have created the routine to demonstrate gsubg's abilities; I haven't decided yet whether or not to commit the routine to OpenOpt, with or without a special class for this problem. In either case you can very easily create problems like this one in FuncDesigner (without having to write a routine for derivatives) and solve them with gsubg or another NSP solver; however, IIRC FuncDesigner dot() doesn't work with sparse matrices yet.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From tmp50 at ukr.net Mon Jul 16 15:36:42 2012 From: tmp50 at ukr.net (Dmitrey) Date: Mon, 16 Jul 2012 22:36:42 +0300 Subject: [SciPy-User] [Numpy-discussion] routine for linear least norms problems with specifiable accuracy In-Reply-To: <1342464458.11238.0.camel@farnsworth> References: <1342464458.11238.0.camel@farnsworth> <79633.1342460109.14890437708640157696@ffe16.ukr.net> Message-ID: <11828.1342467402.1956568605669523456@ffe12.ukr.net>

gsubg uses N. Zhurbenko's ( http://openopt.org/NikolayZhurbenko ) epsilon-subgradient method; ralg and amsg2p use other algorithms.

--- Original message ---
From: "Henry Gomersall"
To: "Discussion of Numerical Python"
Date: 16 July 2012, 21:47:47
Subject: Re: [Numpy-discussion] routine for linear least norms problems with specifiable accuracy

> On Mon, 2012-07-16 at 20:35 +0300, Dmitrey wrote:
> I have written a routine to solve dense / sparse problems
> min {alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||^2 + beta1 * ||x||_1
> + beta2 * ||x||^2}
> with specifiable accuracy fTol > 0: abs(f-f*) <= fTol (this parameter
> is handled by the solvers gsubg and maybe amsg2p; the latter requires a known
> good enough fOpt estimation). Constraints (box-bound, linear,
> quadratic) could also easily be connected.
> Interesting. What algorithm are you using?

Henry

_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From heng at cantab.net Mon Jul 16 14:47:38 2012 From: heng at cantab.net (Henry Gomersall) Date: Mon, 16 Jul 2012 19:47:38 +0100 Subject: [SciPy-User] [Numpy-discussion] routine for linear least norms problems with specifiable accuracy In-Reply-To: <79633.1342460109.14890437708640157696@ffe16.ukr.net> References: <79633.1342460109.14890437708640157696@ffe16.ukr.net> Message-ID: <1342464458.11238.0.camel@farnsworth> On Mon, 2012-07-16 at 20:35 +0300, Dmitrey wrote:
> I have written a routine to solve dense / sparse problems
> min {alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||^2 + beta1 * ||x||_1
> + beta2 * ||x||^2}
> with specifiable accuracy fTol > 0: abs(f-f*) <= fTol (this parameter
> is handled by the solvers gsubg and maybe amsg2p; the latter requires a known
> good enough fOpt estimation). Constraints (box-bound, linear,
> quadratic) could also easily be connected.

Interesting. What algorithm are you using?

Henry

From stkamdar at gmail.com Mon Jul 16 20:31:13 2012 From: stkamdar at gmail.com (Sagar Kamdar) Date: Mon, 16 Jul 2012 17:31:13 -0700 Subject: [SciPy-User] unable to get scipy working on my Mac with Lion Message-ID: Hey,

I tried installing scipy a few times on my mac following different instructions. The most recent one is here: http://willmore.eu/blog/?p=5

I keep running into this error. Any ideas on what may be going on?
>>> import scipy
>>> scipy.test()
Running unit tests for scipy
NumPy version 1.8.0.dev-1234d1c
NumPy is installed in /Library/Python/2.7/site-packages/numpy-1.8.0.dev_1234d1c-py2.7-macosx-10.7-intel.egg/numpy
SciPy version 0.12.0.dev-52a740c
SciPy is installed in /Library/Python/2.7/site-packages/scipy-0.12.0.dev_52a740c-py2.7-macosx-10.7-intel.egg/scipy
Python version 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]
nose version 1.1.2
.....................................................................................................................................................................................F.FFpython(8816,0x7fff75937960) malloc: *** error for object 0x7f9ba447b120: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6

Thanks for all your help!

--Sagar

-------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.greneche at univ-paris13.fr Tue Jul 17 08:16:35 2012 From: nicolas.greneche at univ-paris13.fr (Nicolas Greneche) Date: Tue, 17 Jul 2012 14:16:35 +0200 Subject: [SciPy-User] mkl_lapack32 ? Message-ID: <500557A3.8030300@univ-paris13.fr> Hi,

I am having some trouble building scipy on my computer.
Here is my site.cfg (at the top level of the scipy directory):

[mkl]
library_dirs = /opt/intel/composerxe-2011.0.084/mkl/lib/intel64
include_dirs = /opt/intel/composerxe-2011.0.084/mkl/include
mkl_libs = mkl_intel_lp64,mkl_intel_thread,mkl_core
cc_exe = 'icc -O2 -g -openmp -avx'

I run config / build / install:

python setup.py config --compiler=intelem --fcompiler=intelem build_clib --compiler=intelem --fcompiler=intelem build_ext --compiler=intelem --fcompiler=intelem install --prefix=/usr/local

icc: build/src.linux-x86_64-2.6/fortranobject.c
creating build/lib.linux-x86_64-2.6/scipy/lib/lapack
icc -m64 -fPIC -shared build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/build/src.linux-x86_64-2.6/scipy/lib/lapack/flapackmodule.o build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/fortranobject.o -L/opt/intel/composerxe-2011.0.084/mkl/lib/intel64 -L/usr/lib64 -Lbuild/temp.linux-x86_64-2.6 -lmkl_lapack32 -lmkl_lapack64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lpython2.6 -o build/lib.linux-x86_64-2.6/scipy/lib/lapack/flapack.so
ld: cannot find -lmkl_lapack32
ld: cannot find -lmkl_lapack32
error: Command "icc -m64 -fPIC -shared build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/build/src.linux-x86_64-2.6/scipy/lib/lapack/flapackmodule.o build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/fortranobject.o -L/opt/intel/composerxe-2011.0.084/mkl/lib/intel64 -L/usr/lib64 -Lbuild/temp.linux-x86_64-2.6 -lmkl_lapack32 -lmkl_lapack64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lpython2.6 -o build/lib.linux-x86_64-2.6/scipy/lib/lapack/flapack.so" failed with exit status 1

I don't understand why it attempts to link with mkl_lapack32 and mkl_lapack64 when it is running on a 64-bit system. Can anyone help?

Regards,

-- Nicolas Grenèche Centre de Ressources Informatiques Université
Paris NORD / UP13 99, avenue Jean-Baptiste Clément 93430 Villetaneuse Tel : 01 49 40 40 35 Fax : 01 48 22 81 50

From ralf.gommers at googlemail.com Tue Jul 17 12:41:18 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 17 Jul 2012 18:41:18 +0200 Subject: [SciPy-User] unable to get scipy working on my Mac with Lion In-Reply-To: References: Message-ID: On Tue, Jul 17, 2012 at 2:31 AM, Sagar Kamdar wrote:
> Hey,
>
> I tried installing scipy a few times on my mac following different
> instructions. The most recent one being here:
> http://willmore.eu/blog/?p=5
>
> I keep running into this error. Any ideas on what may be going on?

You can run the tests as "scipy.test(verbose=2)" to see what's crashing. I'd recommend using a binary installer (either EPD or binaries from sourceforge) to avoid problems. If you want to install from source, do *not* use llvm-gcc. For recent XCode I think normal gcc is not available (it probably still was in the description you linked to above), therefore use clang.

Ralf

> >>> import scipy
> >>> scipy.test()
> Running unit tests for scipy
> NumPy version 1.8.0.dev-1234d1c
> NumPy is installed in /Library/Python/2.7/site-packages/numpy-1.8.0.dev_1234d1c-py2.7-macosx-10.7-intel.egg/numpy
> SciPy version 0.12.0.dev-52a740c
> SciPy is installed in /Library/Python/2.7/site-packages/scipy-0.12.0.dev_52a740c-py2.7-macosx-10.7-intel.egg/scipy
> Python version 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)]
> nose version 1.1.2
> .....................................................................................................................................................................................F.FFpython(8816,0x7fff75937960) malloc: *** error for object 0x7f9ba447b120: pointer being freed was not allocated
> *** set a breakpoint in malloc_error_break to debug
> Abort trap: 6
>
> Thanks for all your help!
> > --Sagar > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From celephicus at gmail.com Wed Jul 18 21:39:59 2012 From: celephicus at gmail.com (Tom Harris) Date: Thu, 19 Jul 2012 11:39:59 +1000 Subject: [SciPy-User] Linear Interpolation Message-ID: Greetings. I would like some help in linear interpolation. Not in doing the interpolation, but in choosing the points for the interpolation (are these called interpolants?). I have experimental data that gives y for a few values of x, and while I can get good approximations using scipy's spline functions, I need good linear interpolation points so that I can perform an approximation in code that is run on a tiny embedded system that can only do integer arithmetic. I have been using a fairly inadequate method of choosing my linear interpolation points: simply putting points on the splined curve and adding points at the point of greatest error until the error reduces to a low enough value. But there has got to be a better way. Do people simply not bother with linear interpolation and go straight to splines? -- Tom Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Wed Jul 18 21:59:46 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 18 Jul 2012 18:59:46 -0700 (PDT) Subject: [SciPy-User] Linear Interpolation In-Reply-To: References: Message-ID: <1342663186.68458.YahooMailNeo@web113401.mail.gq1.yahoo.com> how about linear splines - ie using splrep with k=1? ________________________________ From: Tom Harris To: scipy-user at scipy.org Sent: Thursday, 19 July 2012 1:39 PM Subject: [SciPy-User] Linear Interpolation Greetings. I would like some help in linear interpolation. 
Not in doing the interpolation, but in choosing the points for the interpolation (are these called interpolants?). I have experimental data that gives y for a few values of x, and while I can get good approximations using scipy's spline functions, I need good linear interpolation points so that I can perform an approximation in code that is run on a tiny embedded system that can only do integer arithmetic. I have been using a fairly inadequate method of choosing my linear interpolation points: simply putting points on the splined curve and adding points at the point of greatest error until the error reduces to a low enough value. But there has got to be a better way. Do people simply not bother with linear interpolation and go straight to splines?

-- Tom Harris

_______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Thu Jul 19 11:53:02 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Thu, 19 Jul 2012 17:53:02 +0200 Subject: [SciPy-User] ODE integration : pre allocation In-Reply-To: <1341400035.4164.11.camel@amilo.coursju> References: <1341400035.4164.11.camel@amilo.coursju> Message-ID: <1342713182.13611.25.camel@amilo.coursju> On Wednesday, 04 July 2012 at 13:07 +0200, Fabrice Silva wrote:
> Hi folks,
> I was wondering if the ode integrators in scipy.integrate.ode handle the
> pre-allocation of the result of the rhs (i.e. y'=f(y,t) ).
> A creation/allocation of the result array may lead to a strong
> performance penalty, am I wrong ?
>
> Is there a way to provide a reference to a pre-allocated array that
> would only have to be filled ? For example, the callable signature would
> be : f(t, y, y', *f_args), where t, y and f_args are inputs, and y' is the output. When calling the
> callable, y' would be a reference to the result array to be filled.
> > Any thought?

This would be possible for the *vode solvers by replacing, in vode.pyf (L12),

    double precision dimension(n),intent(out,c) :: ydot

with

    double precision dimension(n),intent(in,out,c) :: ydot

The same change in dop.pyf, replacing

    double precision dimension(n),intent(out,c) :: f

with

    double precision dimension(n),intent(in,out,c) :: f

removes the need to allocate an array for the ydot result.

From ralf.gommers at googlemail.com Thu Jul 19 14:10:28 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 19 Jul 2012 20:10:28 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 Message-ID: Hi,

I am pleased to announce the availability of the first release candidate of SciPy 0.11.0. For this release many new features have been added, and over 120 tickets and pull requests have been closed. Also noteworthy is that the number of contributors for this release has risen to over 50. Some of the highlights are:

- A new module, sparse.csgraph, has been added which provides a number of common sparse graph algorithms.
- New unified interfaces to the existing optimization and root finding functions have been added.

Sources and binaries can be found at https://sourceforge.net/projects/scipy/files/scipy/0.11.0rc1/, release notes are copied below. Please try this release candidate and report any problems on the scipy mailing lists.

Cheers,
Ralf

==========================
SciPy 0.11.0 Release Notes
==========================

.. note:: Scipy 0.11.0 is not released yet!

.. contents::

SciPy 0.11.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. Highlights of this release are:

- A new module has been added which provides a number of common sparse graph algorithms.
- New unified interfaces to the existing optimization and root finding functions have been added.

All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations.
Our development attention will now shift to bug-fix releases on the 0.11.x branch, and on adding new features on the master branch.

This release requires Python 2.4-2.7 or 3.1-3.2 and NumPy 1.5.1 or greater.

New features
============

Sparse Graph Submodule
----------------------

The new submodule :mod:`scipy.sparse.csgraph` implements a number of efficient graph algorithms for graphs stored as sparse adjacency matrices. Available routines are:

- :func:`connected_components` - determine connected components of a graph
- :func:`laplacian` - compute the laplacian of a graph
- :func:`shortest_path` - compute the shortest path between points on a positive graph
- :func:`dijkstra` - use Dijkstra's algorithm for shortest path
- :func:`floyd_warshall` - use the Floyd-Warshall algorithm for shortest path
- :func:`breadth_first_order` - compute a breadth-first order of nodes
- :func:`depth_first_order` - compute a depth-first order of nodes
- :func:`breadth_first_tree` - construct the breadth-first tree from a given node
- :func:`depth_first_tree` - construct a depth-first tree from a given node
- :func:`minimum_spanning_tree` - construct the minimum spanning tree of a graph

``scipy.optimize`` improvements
-------------------------------

The optimize module has received a lot of attention this release. In addition to added tests, documentation improvements, bug fixes and code clean-up, the following improvements were made:

- A unified interface to minimizers of univariate and multivariate functions has been added.
- A unified interface to root finding algorithms for multivariate functions has been added.
- The L-BFGS-B algorithm has been updated to version 3.0.

Unified interfaces to minimizers
````````````````````````````````

Two new functions ``scipy.optimize.minimize`` and ``scipy.optimize.minimize_scalar`` were added to provide a common interface to minimizers of multivariate and univariate functions respectively.
For multivariate functions, ``scipy.optimize.minimize`` provides an interface to methods for unconstrained optimization (`fmin`, `fmin_powell`, `fmin_cg`, `fmin_ncg`, `fmin_bfgs` and `anneal`) or constrained optimization (`fmin_l_bfgs_b`, `fmin_tnc`, `fmin_cobyla` and `fmin_slsqp`). For univariate functions, ``scipy.optimize.minimize_scalar`` provides an interface to methods for unconstrained and bounded optimization (`brent`, `golden`, `fminbound`). This allows for easier comparison of, and switching between, solvers.

Unified interface to root finding algorithms
````````````````````````````````````````````

The new function ``scipy.optimize.root`` provides a common interface to root finding algorithms for multivariate functions, embedding the `fsolve`, `leastsq` and `nonlin` solvers.

``scipy.linalg`` improvements
-----------------------------

New matrix equation solvers
```````````````````````````

Solvers for the Sylvester equation (``scipy.linalg.solve_sylvester``), discrete and continuous Lyapunov equations (``scipy.linalg.solve_lyapunov``, ``scipy.linalg.solve_discrete_lyapunov``) and discrete and continuous algebraic Riccati equations (``scipy.linalg.solve_continuous_are``, ``scipy.linalg.solve_discrete_are``) have been added to ``scipy.linalg``. These solvers are often used in the field of linear control theory.

QZ and QR Decomposition
````````````````````````

It is now possible to calculate the QZ, or Generalized Schur, decomposition using ``scipy.linalg.qz``. This function wraps the LAPACK routines sgges, dgges, cgges, and zgges.

The function ``scipy.linalg.qr_multiply``, which allows efficient computation of the matrix product of Q (from a QR decomposition) and a vector, has been added.

Pascal matrices
```````````````

A function for creating Pascal matrices, ``scipy.linalg.pascal``, was added.
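As a quick illustration (a minimal sketch; by default ``pascal`` builds the symmetric kind, whose (i, j) entry is the binomial coefficient C(i + j, i)):

```python
import numpy as np
from scipy.linalg import pascal

P = pascal(4)   # symmetric Pascal matrix of order 4
# P == [[ 1,  1,  1,  1],
#       [ 1,  2,  3,  4],
#       [ 1,  3,  6, 10],
#       [ 1,  4, 10, 20]]
print(P)
```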
Sparse matrix construction and operations
-----------------------------------------

Two new functions, ``scipy.sparse.diags`` and ``scipy.sparse.block_diag``, were added to easily construct diagonal and block-diagonal sparse matrices respectively.

``scipy.sparse.csc_matrix`` and ``csr_matrix`` now support the operations ``sin``, ``tan``, ``arcsin``, ``arctan``, ``sinh``, ``tanh``, ``arcsinh``, ``arctanh``, ``rint``, ``sign``, ``expm1``, ``log1p``, ``deg2rad``, ``rad2deg``, ``floor``, ``ceil`` and ``trunc``. Previously, these operations had to be performed by operating on the matrices' ``data`` attribute.

LSMR iterative solver
---------------------

LSMR, an iterative method for solving (sparse) linear and linear least-squares systems, was added as ``scipy.sparse.linalg.lsmr``.

Discrete Sine Transform
-----------------------

Bindings for the discrete sine transform functions have been added to ``scipy.fftpack``.

``scipy.interpolate`` improvements
----------------------------------

For interpolation in spherical coordinates, the three classes ``scipy.interpolate.SmoothSphereBivariateSpline``, ``scipy.interpolate.LSQSphereBivariateSpline``, and ``scipy.interpolate.RectSphereBivariateSpline`` have been added.

Binned statistics (``scipy.stats``)
-----------------------------------

The stats module has gained functions to do binned statistics, which are a generalization of histograms, in 1-D, 2-D and multiple dimensions: ``scipy.stats.binned_statistic``, ``scipy.stats.binned_statistic_2d`` and ``scipy.stats.binned_statistic_dd``.

Deprecated features
===================

``scipy.sparse.cs_graph_components`` has been made a part of the sparse graph submodule, and renamed to ``scipy.sparse.csgraph.connected_components``. Calling the former routine will result in a deprecation warning.

``scipy.misc.radon`` has been deprecated. A more full-featured radon transform can be found in scikits-image.

``scipy.io.save_as_module`` has been deprecated.
A better way to save multiple Numpy arrays is the ``numpy.savez`` function.

The `xa` and `xb` parameters for all distributions in
``scipy.stats.distributions`` were already unused; they have now been
deprecated.

Backwards incompatible changes
==============================

Removal of ``scipy.maxentropy``
-------------------------------

The ``scipy.maxentropy`` module, which was deprecated in the 0.10.0 release,
has been removed. Logistic regression in scikits.learn is a good and modern
alternative for this functionality.

Minor change in behavior of ``splev``
-------------------------------------

The spline evaluation function now behaves similarly to ``interp1d`` for
size-1 arrays. Previous behavior::

    >>> from scipy.interpolate import splev, splrep, interp1d
    >>> x = [1,2,3,4,5]
    >>> y = [4,5,6,7,8]
    >>> tck = splrep(x, y)
    >>> splev([1], tck)
    4.
    >>> splev(1, tck)
    4.

Corrected behavior::

    >>> splev([1], tck)
    array([ 4.])
    >>> splev(1, tck)
    array(4.)

This also affects the ``UnivariateSpline`` classes.

Behavior of ``scipy.integrate.complex_ode``
-------------------------------------------

The behavior of the ``y`` attribute of ``complex_ode`` has changed.
Previously, it expressed the complex-valued solution in the form::

    z = ode.y[::2] + 1j * ode.y[1::2]

Now, it is directly the complex-valued solution::

    z = ode.y

Minor change in behavior of T-tests
-----------------------------------

The T-tests ``scipy.stats.ttest_ind``, ``scipy.stats.ttest_rel`` and
``scipy.stats.ttest_1samp`` have been changed so that 0 / 0 now returns NaN
instead of 1.

Other changes
=============

The SuperLU sources in ``scipy.sparse.linalg`` have been updated to version
4.3 from upstream.

The function ``scipy.signal.bode``, which calculates magnitude and phase data
for a continuous-time system, has been added.

The two-sample T-test ``scipy.stats.ttest_ind`` gained an option to compare
samples with unequal variances, i.e. Welch's T-test.
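A minimal sketch of the new Welch option (the samples below are invented for
illustration; only the ``equal_var`` keyword is from the release notes):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(12345)
a = rng.normal(loc=0.0, scale=1.0, size=500)  # variance ~1, large sample
b = rng.normal(loc=0.0, scale=3.0, size=50)   # variance ~9, small sample

# Default: classic Student's t-test, which pools the two variances.
t_pooled, p_pooled = stats.ttest_ind(a, b)

# New in 0.11: Welch's t-test, which does not assume equal variances.
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)
```

When the variances (or sample sizes) differ as much as here, the two variants
can give noticeably different statistics and p-values.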
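The binned statistics functions added in this release are essentially
histograms that aggregate a second variable per bin; a quick sketch with
made-up data:

```python
import numpy as np
from scipy import stats

x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.5])
y = np.array([10., 25., 20., 35., 30., 40.])

# Mean of y in four equal-width bins of x over [0, 4):
means, edges, binnumber = stats.binned_statistic(x, y, statistic="mean",
                                                 bins=4, range=(0, 4))
# Bins are [0,1), [1,2), [2,3), [3,4]; e.g. the second bin averages 25 and 20.
```

``statistic`` also accepts ``"median"``, ``"count"``, ``"sum"`` or an
arbitrary callable, and the 2-D and N-D variants work the same way.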
``scipy.misc.logsumexp`` now takes an optional ``axis`` keyword argument.

Authors
=======

This release contains work by the following people (contributed at least one
patch to this release, names in alphabetical order):

* Jeff Armstrong
* Chad Baker
* Brandon Beacher +
* behrisch +
* borishim +
* Matthew Brett
* Lars Buitinck
* Luis Pedro Coelho +
* Johann Cohen-Tanugi
* David Cournapeau
* dougal +
* Ali Ebrahim +
* endolith +
* Bjørn Forsman +
* Robert Gantner +
* Sebastian Gassner +
* Christoph Gohlke
* Ralf Gommers
* Yaroslav Halchenko
* Charles Harris
* Jonathan Helmus +
* Andreas Hilboll +
* Marc Honnorat +
* Jonathan Hunt +
* Maxim Ivanov +
* Thouis (Ray) Jones
* Christopher Kuster +
* Josh Lawrence +
* Denis Laxalde +
* Travis Oliphant
* Joonas Paalasmaa +
* Fabian Pedregosa
* Josef Perktold
* Gavin Price +
* Jim Radford +
* Andrew Schein +
* Skipper Seabold
* Jacob Silterra +
* Scott Sinclair
* Alexis Tabary +
* Martin Teichmann
* Matt Terry +
* Nicky van Foreest +
* Jacob Vanderplas
* Patrick Varilly +
* Pauli Virtanen
* Nils Wagner +
* Darryl Wally +
* Stefan van der Walt
* Liming Wang +
* David Warde-Farley +
* Warren Weckesser
* Sebastian Werk +
* Mike Wimmer +
* Tony S Yu +

A total of 55 people contributed to this release. People with a "+" by their
names contributed a patch for the first time.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at hilboll.de  Fri Jul 20 05:34:01 2012
From: lists at hilboll.de (Andreas Hilboll)
Date: Fri, 20 Jul 2012 11:34:01 +0200
Subject: [SciPy-User] [SciPy-Dev] ANN: SciPy 0.11.0 release candidate 1
In-Reply-To: References:
Message-ID: <3fb4522421d3b87335b5d040a0e6cd4b.squirrel@srv2.s4y.tournesol-consulting.eu>

> Hi,
>
> I am pleased to announce the availability of the first release candidate
> of SciPy 0.11.0. For this release many new features have been added, and
> over 120 tickets and pull requests have been closed.
> Also noteworthy is that the number of contributors for this release has
> risen to over 50. Some of the highlights are:
>
>   - A new module, sparse.csgraph, has been added which provides a number
>     of common sparse graph algorithms.
>   - New unified interfaces to the existing optimization and root finding
>     functions have been added.
>
> Sources and binaries can be found at
> https://sourceforge.net/projects/scipy/files/scipy/0.11.0rc1/, release
> notes are copied below.
>
> Please try this release candidate and report any problems on the scipy
> mailing lists.

Failure on Archlinux 64bit, Python 2.7.3, Numpy 1.6.1. This is what I did:

mkvirtualenv --system-site-packages --distribute scipy_test_rc1
cd ~/.virtualenvs/scipy_test_rc1
mkdir src
wget -O src/scipy-0.11.0rc1.tar.gz "http://downloads.sourceforge.net/project/scipy/scipy/0.11.0rc1/scipy-0.11.0rc1.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fscipy%2Ffiles%2Fscipy%2F0.11.0rc1%2F&ts=1342772081&use_mirror=netcologne"
cd src
tar xzf scipy-0.11.0rc1.tar.gz
cd scipy-0.11.0rc1
python setup.py build
python setup.py install
cd
python -c "import scipy; scipy.test('full')"

Running unit tests for scipy
NumPy version 1.6.1
NumPy is installed in /usr/lib/python2.7/site-packages/numpy
SciPy version 0.11.0rc1
SciPy is installed in /home2/hilboll/.virtualenvs/scipy_test_rc1/lib/python2.7/site-packages/scipy
Python version 2.7.3 (default, Apr 24 2012, 00:00:54) [GCC 4.7.0 20120414 (prerelease)]
nose version 1.1.2

[...]
======================================================================
FAIL: test_basic.TestNorm.test_stable
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home2/hilboll/.virtualenvs/scipy_test_rc1/lib/python2.7/site-packages/scipy/linalg/tests/test_basic.py", line 585, in test_stable
    assert_almost_equal(norm(a) - 1e4, 0.5)
  File "/usr/lib/python2.7/site-packages/numpy/testing/utils.py", line 468, in assert_almost_equal
    raise AssertionError(msg)
AssertionError: Arrays are not almost equal to 7 decimals
 ACTUAL: 0.0
 DESIRED: 0.5

----------------------------------------------------------------------
FAILED (KNOWNFAIL=16, SKIP=42, failures=1)

From denis-bz-gg at t-online.de  Fri Jul 20 06:14:30 2012
From: denis-bz-gg at t-online.de (denis)
Date: Fri, 20 Jul 2012 12:14:30 +0200
Subject: [SciPy-User] efficiency of the simplex routine: R (optim) vs scipy.optimize.fmin
In-Reply-To: References:
Message-ID:

Hi Mathieu,
  (months later) two differences among implementations of Nelder-Mead:

1) the start simplex: x0 +- what? It's common to take x0 + a fixed
   (user-specified) stepsize in each dimension. NLOpt takes a "walking
   simplex"; I don't know what R does.

2) termination: what ftol, xtol did you specify? NLOpt looks at fhi - flo:
   fhi changes at each iteration, flo is sticky.

Could you post a testcase similar to yours? That would sure be helpful.

cheers
  -- denis

On 24/05/2012 10:15, servant mathieu wrote:
> Dear scipy users,
> Again a question about optimization.
> I've just compared the efficiency of the simplex routine in R
> (optim) vs scipy (fmin), when minimizing a chi-square. fmin is faster
> than optim, but appears to be less efficient. In R, the value of the
> function is always minimized step by step (there are of course some
> exceptions) while there are lots of fluctuations in Python.
Given that the > underlying simplex algorithm is supposed to be the same, which mechanism > is responsible for this difference? Is it possible to constrain fmin so > it could be more rigorous? > Cheers, > Mathieu From scott.sinclair.za at gmail.com Fri Jul 20 06:34:35 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Fri, 20 Jul 2012 12:34:35 +0200 Subject: [SciPy-User] [Numpy-discussion] [SciPy-Dev] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <50092F08.3080605@it.uu.se> References: <3fb4522421d3b87335b5d040a0e6cd4b.squirrel@srv2.s4y.tournesol-consulting.eu> <50092F08.3080605@it.uu.se> Message-ID: On 20 July 2012 12:12, Virgil Stokes wrote: >>> I am pleased to announce the availability of the first release candidate >>> of >>> SciPy 0.11.0. For this release many new features have been added, and over > But when I use > > > pip install --upgrade SciPy > > Installation of vers. 0.10.1 is attempted. This is because the latest stable release on PyPI is 0.10.1 (http://pypi.python.org/pypi/scipy/). The release candidates aren't uploaded to the package index, but are located at https://sourceforge.net/projects/scipy/files/scipy/0.11.0rc1/ Cheers, Scott From scott.sinclair.za at gmail.com Fri Jul 20 07:51:42 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Fri, 20 Jul 2012 13:51:42 +0200 Subject: [SciPy-User] [Numpy-discussion] [SciPy-Dev] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <500935A0.9040104@it.uu.se> References: <3fb4522421d3b87335b5d040a0e6cd4b.squirrel@srv2.s4y.tournesol-consulting.eu> <50092F08.3080605@it.uu.se> <500935A0.9040104@it.uu.se> Message-ID: On 20 July 2012 12:40, Virgil Stokes wrote: > Thanks Scott, > > This seemed to have worked, although there was an error message during the > installation of scipy-0.11.0rc1-win32-superpack-python2.6.exe. I don't know much about Windows, but don't think that the installer removes your previous Scipy install first. 
You may want to try removing Scipy first using Add/Remove programs then run the installer again. If you still have problems try posting the full error message. It's hard for anyone to guess what's wrong if you don't :) > How can I determine if I indeed have installed SciPy 0.11.0rc1 successfully? >>> import scipy >>> scipy.__version__ >>> print scipy.__version__ 0.11.0rc1 Please try keep replies on-list. Cheers, Scott From DParker at chromalloy.com Fri Jul 20 09:08:56 2012 From: DParker at chromalloy.com (DParker at chromalloy.com) Date: Fri, 20 Jul 2012 09:08:56 -0400 Subject: [SciPy-User] KDTree IndexError Message-ID: I'm using scipy.spatial.KDTree (version 0.10.1) to perform nearest neighbor interpolation. Recently I've encountered an error when defining a KDTree from certain sets of data. I have not been able to determine what is unique about these data sets and the traceback doesn't provide much of a clue to me. I was hoping someone might be familiar with the problem and provide some insight into what is causing the failure, what to look for in my data set, and how to avoid the failure. The code which generates the traceback is: tree = KDTree(zip(x2,y2,z2)) where x2, y2, and z2 are numpy arrays dtype float64. For one particular data set which fails these have a length of 157,237, shape is (157237,). As I said above I don't know what is unusual about the data that causes the failure to occur. 
The traceback is copied below: Traceback (most recent call last): File "", line 1, in File "", line 75, in interactivemode File "C:\Python26\lib\site-packages\fluenttools\profile.py", line 740, in commonnodes tree = KDTree(zip(x2,y2,z2)) File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 174, in __init__ self.tree = self.__build(np.arange(self.n), self.maxes, self.mins) File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 238, in __build self.__build(idx[less_idx],lessmaxes,mins), {Prior two lines repeated 974 times} File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 201, in __build data = self.data[idx] IndexError: index must be either an int or a sequence David G. Parker -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Fri Jul 20 09:47:13 2012 From: hasslerjc at comcast.net (John Hassler) Date: Fri, 20 Jul 2012 09:47:13 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: References: Message-ID: <50096161.8060207@comcast.net> Crashes with "unhandled exception in python.exe" in Python 3.2. The computer is: Intel Core2 Quad Q6600 with 3 GB memory Windows XP Service Pack 3. I removed and re-installed Scipy, but there was no change in the result. Are there other things I could check that might be helpful? john >>> import scipy >>> scipy.test(verbose=10) Running unit tests for scipy NumPy version 1.6.2 NumPy is installed in C:\Python32\lib\site-packages\numpy SciPy version 0.11.0rc1 SciPy is installed in C:\Python32\lib\site-packages\scipy Python version 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] nose version 1.0.0 nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', 'gen_ext', 'pyrex_ext', 'swig_ext'] Tests cophenet(Z) on tdist data set. ... ok Tests cophenet(Z, Y) on tdist data set. ... ok ......... test_qz_complex64 (test_decomp.TestQZ) ... ok test_qz_complex_sort (test_decomp.TestQZ) ... 
ok test_qz_double (test_decomp.TestQZ) ... ok test_qz_double_complex (test_decomp.TestQZ) ... ok test_qz_double_sort (test_decomp.TestQZ) ... >>> ================================ RESTART ================================ >>> From cgohlke at uci.edu Fri Jul 20 11:34:25 2012 From: cgohlke at uci.edu (Christoph Gohlke) Date: Fri, 20 Jul 2012 08:34:25 -0700 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <50096161.8060207@comcast.net> References: <50096161.8060207@comcast.net> Message-ID: <50097A81.3060008@uci.edu> On 7/20/2012 6:47 AM, John Hassler wrote: > Crashes with "unhandled exception in python.exe" in Python 3.2. The > computer is: > Intel Core2 Quad Q6600 with 3 GB memory > Windows XP Service Pack 3. > I removed and re-installed Scipy, but there was no change in the result. > > Are there other things I could check that might be helpful? > john > > >>> import scipy > >>> scipy.test(verbose=10) > Running unit tests for scipy > NumPy version 1.6.2 > NumPy is installed in C:\Python32\lib\site-packages\numpy > SciPy version 0.11.0rc1 > SciPy is installed in C:\Python32\lib\site-packages\scipy > Python version 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit > (Intel)] > nose version 1.0.0 > nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', > 'gen_ext', 'pyrex_ext', 'swig_ext'] > Tests cophenet(Z) on tdist data set. ... ok > Tests cophenet(Z, Y) on tdist data set. ... ok > > ......... > > test_qz_complex64 (test_decomp.TestQZ) ... ok > test_qz_complex_sort (test_decomp.TestQZ) ... ok > test_qz_double (test_decomp.TestQZ) ... ok > test_qz_double_complex (test_decomp.TestQZ) ... ok > test_qz_double_sort (test_decomp.TestQZ) ... > >>> ================================ RESTART > ================================ > >>> > I can reproduce this with the official numpy/scipy-rc binaries, but not the msvc9/MKL ones. 
The faulthandler traceback is: File "X:\Python32-x32\lib\site-packages\scipy\linalg\_decomp_qz.py", line 170 in qz File "X:\Python32-x32\lib\site-packages\scipy\linalg\tests\test_decomp.py", line 1705 in test_qz_double_sort Line 170 of linalg\_decomp_qz.py: result = gges(sfunction, a1, b1, lwork=lwork, overwrite_a=overwrite_a, overwrite_b=overwrite_b, sort_t=sort_t) Hope it helps. Christoph From guziy.sasha at gmail.com Fri Jul 20 13:02:33 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Fri, 20 Jul 2012 13:02:33 -0400 Subject: [SciPy-User] KDTree IndexError In-Reply-To: References: Message-ID: Hi, is it possible to have x2,y2,z2 to test it? Thanks -- Oleksandr Huziy 2012/7/20 > I'm using scipy.spatial.KDTree (version 0.10.1) to perform nearest > neighbor interpolation. Recently I've encountered an error when defining a > KDTree from certain sets of data. I have not been able to determine what is > unique about these data sets and the traceback doesn't provide much of a > clue to me. I was hoping someone might be familiar with the problem and > provide some insight into what is causing the failure, what to look for in > my data set, and how to avoid the failure. > > The code which generates the traceback is: > tree = KDTree(zip(x2,y2,z2)) > > where x2, y2, and z2 are numpy arrays dtype float64. For one particular > data set which fails these have a length of 157,237, shape is (157237,). > > > As I said above I don't know what is unusual about the data that causes > the failure to occur. 
The traceback is copied below: > > Traceback (most recent call last): > File "", line 1, in > File "", line 75, in interactivemode > File "C:\Python26\lib\site-packages\fluenttools\profile.py", line 740, > in commonnodes > tree = KDTree(zip(x2,y2,z2)) > File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 174, > in __init__ > self.tree = self.__build(np.arange(self.n), self.maxes, self.mins) > File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 238, > in __build > self.__build(idx[less_idx],lessmaxes,mins), > > {Prior two lines repeated 974 times} > > File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 201, > in __build > data = self.data[idx] > IndexError: index must be either an int or a sequence > > David G. Parker > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From DParker at chromalloy.com Fri Jul 20 15:24:50 2012 From: DParker at chromalloy.com (DParker at chromalloy.com) Date: Fri, 20 Jul 2012 15:24:50 -0400 Subject: [SciPy-User] KDTree IndexError In-Reply-To: References: Message-ID: Yes but the mailing list bounced back the email with the data attached due to size so I've emailed the file directly to you. I have two examples - one that works and one that causes a failure. Some info on the data: Good Bad Min Max Min Max X -0.0551 0.0569 0.0610 0.0911 Y -0.0540 0.0350 -0.0586 0.0185 Z 1.0589 1.0702 0.9121 1.0624 I'll add that in the original script where I first encountered the KDTree failure there was some pre-processing of the data before building the KDTree: converting units, sorting the coordinates. I've recently eliminated all of the pre-processing and I still encountered the same failure. David G. 
Parker From: Oleksandr Huziy To: SciPy Users List Date: 07/20/2012 01:02 PM Subject: Re: [SciPy-User] KDTree IndexError Sent by: scipy-user-bounces at scipy.org Hi, is it possible to have x2,y2,z2 to test it? Thanks -- Oleksandr Huziy 2012/7/20 I'm using scipy.spatial.KDTree (version 0.10.1) to perform nearest neighbor interpolation. Recently I've encountered an error when defining a KDTree from certain sets of data. I have not been able to determine what is unique about these data sets and the traceback doesn't provide much of a clue to me. I was hoping someone might be familiar with the problem and provide some insight into what is causing the failure, what to look for in my data set, and how to avoid the failure. The code which generates the traceback is: tree = KDTree(zip(x2,y2,z2)) where x2, y2, and z2 are numpy arrays dtype float64. For one particular data set which fails these have a length of 157,237, shape is (157237,). As I said above I don't know what is unusual about the data that causes the failure to occur. The traceback is copied below: Traceback (most recent call last): File "", line 1, in File "", line 75, in interactivemode File "C:\Python26\lib\site-packages\fluenttools\profile.py", line 740, in commonnodes tree = KDTree(zip(x2,y2,z2)) File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 174, in __init__ self.tree = self.__build(np.arange(self.n), self.maxes, self.mins) File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 238, in __build self.__build(idx[less_idx],lessmaxes,mins), {Prior two lines repeated 974 times} File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 201, in __build data = self.data[idx] IndexError: index must be either an int or a sequence David G. 
Parker _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Fri Jul 20 15:39:08 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 20 Jul 2012 21:39:08 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <50096161.8060207@comcast.net> References: <50096161.8060207@comcast.net> Message-ID: On Fri, Jul 20, 2012 at 3:47 PM, John Hassler wrote: > Crashes with "unhandled exception in python.exe" in Python 3.2. The > computer is: > Intel Core2 Quad Q6600 with 3 GB memory > Windows XP Service Pack 3. > I removed and re-installed Scipy, but there was no change in the result. > > Are there other things I could check that might be helpful? > Could you comment out the crashing test and check that the rest of the tests run fine (or report other failures)? The QZ decomposition code was recently added; looks like it needs some fixing. Thanks, Ralf > john > > >>> import scipy > >>> scipy.test(verbose=10) > Running unit tests for scipy > NumPy version 1.6.2 > NumPy is installed in C:\Python32\lib\site-packages\numpy > SciPy version 0.11.0rc1 > SciPy is installed in C:\Python32\lib\site-packages\scipy > Python version 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit > (Intel)] > nose version 1.0.0 > nose.config: INFO: Excluding tests matching ['f2py_ext', 'f2py_f90_ext', > 'gen_ext', 'pyrex_ext', 'swig_ext'] > Tests cophenet(Z) on tdist data set. ... ok > Tests cophenet(Z, Y) on tdist data set. ... ok > > ......... > > test_qz_complex64 (test_decomp.TestQZ) ... ok > test_qz_complex_sort (test_decomp.TestQZ) ... ok > test_qz_double (test_decomp.TestQZ) ... 
ok > test_qz_double_complex (test_decomp.TestQZ) ... ok > test_qz_double_sort (test_decomp.TestQZ) ... > >>> ================================ RESTART > ================================ > >>> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Fri Jul 20 15:57:35 2012 From: hasslerjc at comcast.net (John Hassler) Date: Fri, 20 Jul 2012 15:57:35 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: References: <50096161.8060207@comcast.net> Message-ID: <5009B82F.3070309@comcast.net> An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Fri Jul 20 16:11:51 2012 From: hasslerjc at comcast.net (John Hassler) Date: Fri, 20 Jul 2012 16:11:51 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: References: <50096161.8060207@comcast.net> Message-ID: <5009BB87.3030303@comcast.net> An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Fri Jul 20 16:24:42 2012 From: hasslerjc at comcast.net (John Hassler) Date: Fri, 20 Jul 2012 16:24:42 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: References: <50096161.8060207@comcast.net> Message-ID: <5009BE8A.3080902@comcast.net> An HTML attachment was scrubbed... URL: From DParker at chromalloy.com Fri Jul 20 16:46:16 2012 From: DParker at chromalloy.com (DParker at chromalloy.com) Date: Fri, 20 Jul 2012 16:46:16 -0400 Subject: [SciPy-User] Fwd: KDTree IndexError In-Reply-To: References: Message-ID: Here it is. https://bitbucket.org/parkerdg/kdtree-failure Thank you for your help! David G. Parker From: Oleksandr Huziy To: DParker at chromalloy.com Date: 07/20/2012 04:07 PM Subject: Fwd: [SciPy-User] KDTree IndexError Hi, I have not received the files. Maybe it's best to upload them somewhere? 
Cheers -- Oleksandr ---------- Forwarded message ---------- From: Date: 2012/7/20 Subject: Re: [SciPy-User] KDTree IndexError To: SciPy Users List Yes but the mailing list bounced back the email with the data attached due to size so I've emailed the file directly to you. I have two examples - one that works and one that causes a failure. Some info on the data: Good Bad Min Max Min Max X -0.0551 0.0569 0.0610 0.0911 Y -0.0540 0.0350 -0.0586 0.0185 Z 1.0589 1.0702 0.9121 1.0624 I'll add that in the original script where I first encountered the KDTree failure there was some pre-processing of the data before building the KDTree: converting units, sorting the coordinates. I've recently eliminated all of the pre-processing and I still encountered the same failure. David G. Parker From: Oleksandr Huziy To: SciPy Users List Date: 07/20/2012 01:02 PM Subject: Re: [SciPy-User] KDTree IndexError Sent by: scipy-user-bounces at scipy.org Hi, is it possible to have x2,y2,z2 to test it? Thanks -- Oleksandr Huziy 2012/7/20 I'm using scipy.spatial.KDTree (version 0.10.1) to perform nearest neighbor interpolation. Recently I've encountered an error when defining a KDTree from certain sets of data. I have not been able to determine what is unique about these data sets and the traceback doesn't provide much of a clue to me. I was hoping someone might be familiar with the problem and provide some insight into what is causing the failure, what to look for in my data set, and how to avoid the failure. The code which generates the traceback is: tree = KDTree(zip(x2,y2,z2)) where x2, y2, and z2 are numpy arrays dtype float64. For one particular data set which fails these have a length of 157,237, shape is (157237,). As I said above I don't know what is unusual about the data that causes the failure to occur. 
The traceback is copied below: Traceback (most recent call last): File "", line 1, in File "", line 75, in interactivemode File "C:\Python26\lib\site-packages\fluenttools\profile.py", line 740, in commonnodes tree = KDTree(zip(x2,y2,z2)) File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 174, in __init__ self.tree = self.__build(np.arange(self.n), self.maxes, self.mins) File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 238, in __build self.__build(idx[less_idx],lessmaxes,mins), {Prior two lines repeated 974 times} File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 201, in __build data = self.data[idx] IndexError: index must be either an int or a sequence David G. Parker _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Fri Jul 20 18:19:10 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Fri, 20 Jul 2012 18:19:10 -0400 Subject: [SciPy-User] Fwd: KDTree IndexError In-Reply-To: References: Message-ID: Hi, the kdtree implementation uses recursion, and the standard limit of the recursion depth in python is 1000. For some of your data you need more than 1000 (for the bad data). When I put import sys sys.setrecursionlimit(10000) your bad data script works OK. Probably it would work with less... Cheers -- Oleksandr 2012/7/20 > Here it is. https://bitbucket.org/parkerdg/kdtree-failure > > Thank you for your help! > > *David G. 
Parker > * > > > From: Oleksandr Huziy > To: DParker at chromalloy.com > Date: 07/20/2012 04:07 PM > Subject: Fwd: [SciPy-User] KDTree IndexError > ------------------------------ > > > > Hi, > > I have not received the files. Maybe it's best to upload them somewhere? > > Cheers > -- > Oleksandr > > ---------- Forwarded message ---------- > From: <*DParker at chromalloy.com* > > Date: 2012/7/20 > Subject: Re: [SciPy-User] KDTree IndexError > To: SciPy Users List <*scipy-user at scipy.org* > > > > Yes but the mailing list bounced back the email with the data attached due > to size so I've emailed the file directly to you. > > I have two examples - one that works and one that causes a failure. > > Some info on the data: > Good Bad > Min Max Min Max > X -0.0551 0.0569 0.0610 0.0911 > Y -0.0540 0.0350 -0.0586 0.0185 > Z 1.0589 1.0702 0.9121 1.0624 > > I'll add that in the original script where I first encountered the KDTree > failure there was some pre-processing of the data before building the > KDTree: converting units, sorting the coordinates. I've recently eliminated > all of the pre-processing and I still encountered the same failure. > * > David G. Parker* > > > > From: Oleksandr Huziy <*guziy.sasha at gmail.com* > > > To: SciPy Users List <*scipy-user at scipy.org* > > > Date: 07/20/2012 01:02 PM > Subject: Re: [SciPy-User] KDTree IndexError > Sent by: *scipy-user-bounces at scipy.org* > ------------------------------ > > > > > Hi, > > is it possible to have x2,y2,z2 to test it? > > Thanks > -- > Oleksandr Huziy > > 2012/7/20 <*DParker at chromalloy.com* > > I'm using scipy.spatial.KDTree (version 0.10.1) to perform nearest > neighbor interpolation. Recently I've encountered an error when defining a > KDTree from certain sets of data. I have not been able to determine what is > unique about these data sets and the traceback doesn't provide much of a > clue to me. 
I was hoping someone might be familiar with the problem and > provide some insight into what is causing the failure, what to look for in > my data set, and how to avoid the failure. > > The code which generates the traceback is: > tree = KDTree(zip(x2,y2,z2)) > > where x2, y2, and z2 are numpy arrays dtype float64. For one particular > data set which fails these have a length of 157,237, shape is (157237,). > > > As I said above I don't know what is unusual about the data that causes > the failure to occur. The traceback is copied below: > > Traceback (most recent call last): > File "", line 1, in > File "", line 75, in interactivemode > File "C:\Python26\lib\site-packages\fluenttools\profile.py", line 740, > in commonnodes > tree = KDTree(zip(x2,y2,z2)) > File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 174, > in __init__ > self.tree = self.__build(np.arange(self.n), self.maxes, self.mins) > File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 238, > in __build > self.__build(idx[less_idx],lessmaxes,mins), > > {Prior two lines repeated 974 times} > > File "C:\Python26\lib\site-packages\scipy\spatial\kdtree.py", line 201, > in __build > data = self.data[idx] > IndexError: index must be either an int or a sequence > > David G. Parker > _______________________________________________ > SciPy-User mailing list* > **SciPy-User at scipy.org* * > **http://mail.scipy.org/mailman/listinfo/scipy-user* > > _______________________________________________ > SciPy-User mailing list* > **SciPy-User at scipy.org* * > **http://mail.scipy.org/mailman/listinfo/scipy-user* > > > _______________________________________________ > SciPy-User mailing list* > **SciPy-User at scipy.org* * > **http://mail.scipy.org/mailman/listinfo/scipy-user* > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From shahuwang at qq.com  Sat Jul 21 03:52:55 2012
From: shahuwang at qq.com (=?gb18030?B?zfXI8Mba?=)
Date: Sat, 21 Jul 2012 15:52:55 +0800
Subject: [SciPy-User] Some thing wrong in scipy reference
Message-ID:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.whiten.html#scipy.cluster.vq.whiten

In the bottom of this page, the result of the whiten function is wrong.

------------------
www.shahuwang.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at googlemail.com  Sat Jul 21 04:02:10 2012
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 21 Jul 2012 10:02:10 +0200
Subject: [SciPy-User] Some thing wrong in scipy reference
In-Reply-To: References:
Message-ID:

2012/7/21 ???
>
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.whiten.html#scipy.cluster.vq.whiten
>
> In the bottom of this page, the result of the whiten function is wrong.

You're right, the current output as documented is off by a constant factor:

>>> res = whiten(features)
>>> res * 3.41250074 / res[0, 0]  # reproduce docstring output
array([[ 3.41250074,  2.20300046,  5.88897275],
       [ 2.69407953,  2.39456571,  7.62102356],
       [ 1.43684242,  0.57469577,  5.88897275]])

Does anyone know why, and is this result correct?

Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From denis-bz-gg at t-online.de  Sat Jul 21 05:30:53 2012
From: denis-bz-gg at t-online.de (denis)
Date: Sat, 21 Jul 2012 11:30:53 +0200
Subject: [SciPy-User] Linear Interpolation
In-Reply-To: References:
Message-ID:

On 19/07/2012 03:39, Tom Harris wrote:
> Greetings.
>
> I would like some help in linear interpolation. Not in doing the
> interpolation, but in choosing the points for the interpolation (are
> these called interpolants?).
The jargon is "knots";

    interpolator = UnivariateSpline( x, y, k=1, s=0 )  # s=0 interpolates
    print "knots:", interpolator.get_knots()

For fast simple quadratic interpolation in C (not your question, but quadratic is much smoother than linear and cheaper than cubic), see http://stackoverflow.com/questions/3304513/stretching-out-an-array It's for regularly-spaced x_i; for arbitrary spacing, first binary search. You might try http://stackoverflow.com/questions/tagged/interpolation%20or%20curve-fitting

cheers -- denis

From vs at it.uu.se Sat Jul 21 07:19:09 2012 From: vs at it.uu.se (Virgil Stokes) Date: Sat, 21 Jul 2012 13:19:09 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 Message-ID: <500A902D.8090905@it.uu.se> I installed SciPy 0.11.0rc1 on an Ubuntu 12.04 (64-bit) platform with what seemed to be no problems. However, when I ran Spyder it still was using SciPy 0.10.1. I then tried to uninstall SciPy completely using >pip uninstall SciPy which again seemed to be ok. But, now I was unable to execute Spyder! I tried several times to uninstall and reinstall Spyder but I am stuck with no possibility to run Spyder. I get no error; but, clicking in this application in Ubuntu 12.04 seems to be ignored. Any suggestions as to how to recover Spyder and how to get SciPy running again would be appreciated.

From warren.weckesser at enthought.com Sat Jul 21 07:26:45 2012 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 21 Jul 2012 06:26:45 -0500 Subject: [SciPy-User] Some thing wrong in scipy reference In-Reply-To: References: Message-ID: On Sat, Jul 21, 2012 at 3:02 AM, Ralf Gommers wrote: > > > 2012/7/21 ??? > >> >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.whiten.html#scipy.cluster.vq.whiten >> >> In the bottom of this page,the result of the whiten function is wrong.
>> > > You're right, the current output as documented is off by a constant factor: > >>> res = whiten(features) > >>> res * 3.41250074 / res[0, 0] # reproduce docstring output > array([[ 3.41250074, 2.20300046, 5.88897275], > [ 2.69407953, 2.39456571, 7.62102356], > [ 1.43684242, 0.57469577, 5.88897275]]) > > Does anyone know why, and is this result correct? > > Ralf > > whiten(obs) returns (in effect) obs / obs.std(axis=0). It appears that the docstring shows obs / obs.std(axis=0, ddof=1):

In [14]: obs
Out[14]:
array([[ 1.9,  2.3,  1.7],
       [ 1.5,  2.5,  2.2],
       [ 0.8,  0.6,  1.7]])

In [15]: vq.whiten(obs)
Out[15]:
array([[ 4.17944278,  2.69811351,  7.21248917],
       [ 3.29956009,  2.93273208,  9.33380951],
       [ 1.75976538,  0.7038557 ,  7.21248917]])

In [16]: obs / obs.std(axis=0)
Out[16]:
array([[ 4.17944278,  2.69811351,  7.21248917],
       [ 3.29956009,  2.93273208,  9.33380951],
       [ 1.75976538,  0.7038557 ,  7.21248917]])

In [17]: obs / obs.std(axis=0, ddof=1)
Out[17]:
array([[ 3.41250074,  2.20300046,  5.88897275],
       [ 2.69407953,  2.39456571,  7.62102355],
       [ 1.43684242,  0.57469577,  5.88897275]])

The whiten function has not changed in a long time. Has the default value of ddof in the numpy std() method changed in the last five (or more) years? Warren > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Sat Jul 21 08:38:30 2012 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Sat, 21 Jul 2012 14:38:30 +0200 Subject: [SciPy-User] odeint MemoryError In-Reply-To: References: Message-ID: <1342874310.3217.5.camel@amilo.coursju> On Wednesday 11 July 2012 at 13:05 +0000, Loïc wrote: > Hi, > > I'm trying to run scipy.integrate.odeint on a rather large vector, but obtain a > memory error. > I do not really understand why I'm getting this error as my vector can fit into > memory...
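The constant factor Ralf observed can be reproduced with plain NumPy, without going through `scipy.cluster.vq` at all. The sketch below uses the `obs` values from the session above; the only assumption is that `whiten` divides by the population standard deviation, as Warren's session shows:

```python
import numpy as np

obs = np.array([[1.9, 2.3, 1.7],
                [1.5, 2.5, 2.2],
                [0.8, 0.6, 1.7]])

# What whiten() computes: divide each column by its population std (ddof=0).
whitened = obs / obs.std(axis=0)

# What the stale docstring showed: divide by the sample std (ddof=1).
doc_output = obs / obs.std(axis=0, ddof=1)

# With n = 3 rows, the two differ everywhere by the constant factor
# sqrt((n-1)/n) = sqrt(2/3), which is exactly the "off by a constant
# factor" discrepancy reported above.
print(doc_output / whitened)
```

Since `std(ddof=1) = std(ddof=0) * sqrt(n/(n-1))`, the ratio is the same for every entry, so either output is internally consistent; they just use different normalizations.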
> Which scipy internal operation could trigger the error? > Maybe is there some internal parameter to tune? > > Here is a small test program which obtain a MemoryError : > > -- > from scipy import * > from scipy.integrate import * > > n = 45000

Note that odeint is based on lsoda, an algorithm suited for both stiff and nonstiff problems. When it detects that the problem is stiff and you do not provide a banded jacobian matrix, the algorithm needs an internal work array of size 22 + 9 * n + n**2, which may be way too big for your memory. The solution may be either to provide a banded jacobian (easy in the toy example you provided) or to use a simpler ODE solver such as vode (with the Adams integrator, not BDF) or a Runge-Kutta one.

-- Fabrice Silva LMA UPR CNRS 7051

From pav at iki.fi Fri Jul 20 17:00:15 2012 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 20 Jul 2012 23:00:15 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <5009BE8A.3080902@comcast.net> References: <50096161.8060207@comcast.net> <5009BE8A.3080902@comcast.net> Message-ID: <5009C6DF.2030806@iki.fi> On 20.07.2012 22:24, John Hassler wrote: > The offending line is: > AA,BB,Q,Z,sdim = qz(A,B,sort=sort) > > I couldn't find 'sort' defined anywhere, so I arbitrarily changed it to > sort='lhp'. Then it runs, although the test fails. > > Is there something else I can try? Seems to be a problem with the callable sort function then. That it works with sort='lhp' is strange, and probably means that there is not a problem with all callback functions, but something goes wrong in the algorithm. (Which would be expected, if *gees callbacks work without problems.) If you can/know how to recompile, try recompiling with

    set OPT=-g -DDEBUGCFUNCS
    python setup.py .........

This should make the f2py wrappers spit out extra information on what is going on.
-- Pauli Virtanen From ralf.gommers at googlemail.com Sat Jul 21 10:39:35 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 21 Jul 2012 16:39:35 +0200 Subject: [SciPy-User] Some thing wrong in scipy reference In-Reply-To: References: Message-ID: On Sat, Jul 21, 2012 at 1:26 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sat, Jul 21, 2012 at 3:02 AM, Ralf Gommers > wrote: > >> >> >> 2012/7/21 ??? >> >>> >>> http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.whiten.html#scipy.cluster.vq.whiten >>> >>> In the bottom of this page,the result of the whiten function is wrong. >>> >> >> You're right, the current output as documented is off by a constant >> factor: >> >>> res = whiten(features) >> >>> res * 3.41250074 / res[0, 0] # reproduce docstring output >> array([[ 3.41250074, 2.20300046, 5.88897275], >> [ 2.69407953, 2.39456571, 7.62102356], >> [ 1.43684242, 0.57469577, 5.88897275]]) >> >> Does anyone know why, and is this result correct? >> >> Ralf >> >> > > whiten(obs) returns (in effect) obs / obs.std(axis=0). It appears that > the docstring shows obs / obs.std(axis=0, ddof=1): > > In [14]: obs > Out[14]: > array([[ 1.9, 2.3, 1.7], > [ 1.5, 2.5, 2.2], > [ 0.8, 0.6, 1.7]]) > > In [15]: vq.whiten(obs) > Out[15]: > array([[ 4.17944278, 2.69811351, 7.21248917], > [ 3.29956009, 2.93273208, 9.33380951], > [ 1.75976538, 0.7038557 , 7.21248917]]) > > In [16]: obs / obs.std(axis=0) > Out[16]: > array([[ 4.17944278, 2.69811351, 7.21248917], > [ 3.29956009, 2.93273208, 9.33380951], > [ 1.75976538, 0.7038557 , 7.21248917]]) > > In [17]: obs / obs.std(axis=0, ddof=1) > Out[17]: > array([[ 3.41250074, 2.20300046, 5.88897275], > [ 2.69407953, 2.39456571, 7.62102355], > [ 1.43684242, 0.57469577, 5.88897275]]) > > > The whiten function has not changed in a long time. Has the > default value of ddof in the numpy std() method changed in > the last five (or more) years? > Four years ago, in 62e99493, a ddof kw was added. 
But that looks like a backwards-compatible commit to me. I'll change the output in the whiten docstring. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat Jul 21 10:43:15 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 21 Jul 2012 16:43:15 +0200 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <5009B82F.3070309@comcast.net> References: <50096161.8060207@comcast.net> <5009B82F.3070309@comcast.net> Message-ID: On Fri, Jul 20, 2012 at 9:57 PM, John Hassler wrote: > I commented out the entire function 'def test_qz_double_sort(self):' in > 'test_decomp.py. It ran to the end with one error, one failure. > john > > ====================================================================== > ERROR: Failure: AttributeError ('module' object has no attribute > 'FileType') > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "C:\Python32\lib\site-packages\nose-1.0.0-py3.2.egg\nose\failure.py", line > 37, in runTest > raise self.exc_class(self.exc_val).with_traceback(self.tb) > File > "C:\Python32\lib\site-packages\nose-1.0.0-py3.2.egg\nose\loader.py", line > 390, in loadTestsFromName > addr.filename, addr.module) > File > "C:\Python32\lib\site-packages\nose-1.0.0-py3.2.egg\nose\importer.py", line > 39, in importFromPath > return self.importFromDir(dir_path, fqname) > File > "C:\Python32\lib\site-packages\nose-1.0.0-py3.2.egg\nose\importer.py", line > 86, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File "C:\Python32\lib\site-packages\scipy\weave\__init__.py", line 22, > in > from .blitz_tools import blitz > File "C:\Python32\lib\site-packages\scipy\weave\blitz_tools.py", line 6, > in > from . 
import converters > File "C:\Python32\lib\site-packages\scipy\weave\converters.py", line 19, > in > c_spec.file_converter(), > File "C:\Python32\lib\site-packages\scipy\weave\c_spec.py", line 74, in > __init__ > self.init_info() > File "C:\Python32\lib\site-packages\scipy\weave\c_spec.py", line 264, in > init_info > self.matching_types = [types.FileType] > AttributeError: 'module' object has no attribute 'FileType' > > This is due to weave not being python 3.x compatible. The next RC will raise a clearer error for this. > ====================================================================== > FAIL: test_definition (test_real_transforms.TestDSTIDouble) > > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "C:\Python32\lib\site-packages\scipy\fftpack\tests\test_real_transforms.py", > line 213, in test_definition > err_msg="Size %d failed" % i) > File "C:\Python32\lib\site-packages\numpy\testing\utils.py", line 800, > in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "C:\Python32\lib\site-packages\numpy\testing\utils.py", line 636, > in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 15 decimals > Size 256 failed > (mismatch 3.515625%) > x: array([ 1.00000000e+00, -5.03902743e-01, 3.33300126e-01, > -2.51913719e-01, 1.99940224e-01, -1.67900640e-01, > 1.42771743e-01, -1.25881543e-01, 1.11000399e-01,... > y: array([ 1.00000000e+00, -5.03902743e-01, 3.33300126e-01, > -2.51913719e-01, 1.99940224e-01, -1.67900640e-01, > 1.42771743e-01, -1.25881543e-01, 1.11000399e-01,... > This looks like the test precision being a little too high. Could you adjust it to decimal=14 or lower, and tell us when the test passes on your machine? Thanks, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
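The weave failure above is easy to confirm in isolation: `types.FileType` was the Python 2 built-in `file` type, and it was removed in Python 3, which is exactly the `AttributeError` raised when `scipy.weave.c_spec` is imported there. A minimal check, run under Python 3:

```python
import types

# Python 2 had types.FileType (the built-in `file` object); Python 3 removed
# both the built-in and the types alias, so the lookup in c_spec.py fails.
print(hasattr(types, "FileType"))  # False on Python 3
```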
URL: From warren.weckesser at enthought.com Sat Jul 21 10:59:29 2012 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 21 Jul 2012 09:59:29 -0500 Subject: [SciPy-User] Some thing wrong in scipy reference In-Reply-To: References: Message-ID: On Sat, Jul 21, 2012 at 9:39 AM, Ralf Gommers wrote: > > > On Sat, Jul 21, 2012 at 1:26 PM, Warren Weckesser < > warren.weckesser at enthought.com> wrote: > >> >> >> On Sat, Jul 21, 2012 at 3:02 AM, Ralf Gommers < >> ralf.gommers at googlemail.com> wrote: >> >>> >>> >>> 2012/7/21 ??? >>> >>>> >>>> http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.whiten.html#scipy.cluster.vq.whiten >>>> >>>> In the bottom of this page,the result of the whiten function is wrong. >>>> >>> >>> You're right, the current output as documented is off by a constant >>> factor: >>> >>> res = whiten(features) >>> >>> res * 3.41250074 / res[0, 0] # reproduce docstring output >>> array([[ 3.41250074, 2.20300046, 5.88897275], >>> [ 2.69407953, 2.39456571, 7.62102356], >>> [ 1.43684242, 0.57469577, 5.88897275]]) >>> >>> Does anyone know why, and is this result correct? >>> >>> Ralf >>> >>> >> >> whiten(obs) returns (in effect) obs / obs.std(axis=0). It appears that >> the docstring shows obs / obs.std(axis=0, ddof=1): >> >> In [14]: obs >> Out[14]: >> array([[ 1.9, 2.3, 1.7], >> [ 1.5, 2.5, 2.2], >> [ 0.8, 0.6, 1.7]]) >> >> In [15]: vq.whiten(obs) >> Out[15]: >> array([[ 4.17944278, 2.69811351, 7.21248917], >> [ 3.29956009, 2.93273208, 9.33380951], >> [ 1.75976538, 0.7038557 , 7.21248917]]) >> >> In [16]: obs / obs.std(axis=0) >> Out[16]: >> array([[ 4.17944278, 2.69811351, 7.21248917], >> [ 3.29956009, 2.93273208, 9.33380951], >> [ 1.75976538, 0.7038557 , 7.21248917]]) >> >> In [17]: obs / obs.std(axis=0, ddof=1) >> Out[17]: >> array([[ 3.41250074, 2.20300046, 5.88897275], >> [ 2.69407953, 2.39456571, 7.62102355], >> [ 1.43684242, 0.57469577, 5.88897275]]) >> >> >> The whiten function has not changed in a long time. 
Has the >> default value of ddof in the numpy std() method changed in >> the last five (or more) years? >> > > Four years ago, in 62e99493, a ddof kw was added. But that looks like a > backwards-compatible commit to me. I'll change the output in the whiten > docstring. > > Ralf > > While you're at it, could you fix the spelling mistake ("devation") in the description of the return value? Thanks! Warren > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Sat Jul 21 11:09:30 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 21 Jul 2012 17:09:30 +0200 Subject: [SciPy-User] Some thing wrong in scipy reference In-Reply-To: References: Message-ID: On Sat, Jul 21, 2012 at 4:59 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sat, Jul 21, 2012 at 9:39 AM, Ralf Gommers > wrote: > >> >> >> On Sat, Jul 21, 2012 at 1:26 PM, Warren Weckesser < >> warren.weckesser at enthought.com> wrote: >> >>> >>> >>> On Sat, Jul 21, 2012 at 3:02 AM, Ralf Gommers < >>> ralf.gommers at googlemail.com> wrote: >>> >>>> >>>> >>>> 2012/7/21 ??? >>>> >>>>> >>>>> http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.whiten.html#scipy.cluster.vq.whiten >>>>> >>>>> In the bottom of this page,the result of the whiten function is wrong. >>>>> >>>> >>>> You're right, the current output as documented is off by a constant >>>> factor: >>>> >>> res = whiten(features) >>>> >>> res * 3.41250074 / res[0, 0] # reproduce docstring output >>>> array([[ 3.41250074, 2.20300046, 5.88897275], >>>> [ 2.69407953, 2.39456571, 7.62102356], >>>> [ 1.43684242, 0.57469577, 5.88897275]]) >>>> >>>> Does anyone know why, and is this result correct? >>>> >>>> Ralf >>>> >>>> >>> >>> whiten(obs) returns (in effect) obs / obs.std(axis=0). 
It appears that >>> the docstring shows obs / obs.std(axis=0, ddof=1): >>> >>> In [14]: obs >>> Out[14]: >>> array([[ 1.9, 2.3, 1.7], >>> [ 1.5, 2.5, 2.2], >>> [ 0.8, 0.6, 1.7]]) >>> >>> In [15]: vq.whiten(obs) >>> Out[15]: >>> array([[ 4.17944278, 2.69811351, 7.21248917], >>> [ 3.29956009, 2.93273208, 9.33380951], >>> [ 1.75976538, 0.7038557 , 7.21248917]]) >>> >>> In [16]: obs / obs.std(axis=0) >>> Out[16]: >>> array([[ 4.17944278, 2.69811351, 7.21248917], >>> [ 3.29956009, 2.93273208, 9.33380951], >>> [ 1.75976538, 0.7038557 , 7.21248917]]) >>> >>> In [17]: obs / obs.std(axis=0, ddof=1) >>> Out[17]: >>> array([[ 3.41250074, 2.20300046, 5.88897275], >>> [ 2.69407953, 2.39456571, 7.62102355], >>> [ 1.43684242, 0.57469577, 5.88897275]]) >>> >>> >>> The whiten function has not changed in a long time. Has the >>> default value of ddof in the numpy std() method changed in >>> the last five (or more) years? >>> >> >> Four years ago, in 62e99493, a ddof kw was added. But that looks like a >> backwards-compatible commit to me. I'll change the output in the whiten >> docstring. >> >> Ralf >> >> > While you're at it, could you fix the spelling mistake ("devation") in the > description of the return value? Thanks! > Sure, done. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Sat Jul 21 12:06:18 2012 From: hasslerjc at comcast.net (John Hassler) Date: Sat, 21 Jul 2012 12:06:18 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: References: <50096161.8060207@comcast.net> <5009B82F.3070309@comcast.net> Message-ID: <500AD37A.8090800@comcast.net> An HTML attachment was scrubbed... 
URL: From hasslerjc at comcast.net Sat Jul 21 13:21:36 2012 From: hasslerjc at comcast.net (John Hassler) Date: Sat, 21 Jul 2012 13:21:36 -0400 Subject: [SciPy-User] ANN: SciPy 0.11.0 release candidate 1 In-Reply-To: <5009C6DF.2030806@iki.fi> References: <50096161.8060207@comcast.net> <5009BE8A.3080902@comcast.net> <5009C6DF.2030806@iki.fi> Message-ID: <500AE520.6030507@comcast.net> On 7/20/2012 5:00 PM, Pauli Virtanen wrote: > 20.07.2012 22:24, John Hassler kirjoitti: >> The offending line is: >> AA,BB,Q,Z,sdim = qz(A,B,sort=sort) >> >> I couldn't find 'sort' defined anywhere, so I arbitrarily changed it to >> sort='lhp'. Then it runs, although the test fails. >> >> Is there something else I can try? > Seems to be a problem with the callable sort function then. > > That it works with sort='lhp' is strange, and probably means that there > is not a problem with all callback functions, but something goes wrong > in the algorithm. (Which would be expected, if *gees callbacks work > without problems.) > > If you can/know how to recompile, try recompiling with > > set OPT=-g -DDEBUGCFUNCS > python setup.py ......... > > This should make the f2py wrappers spit out extra information on what is > going on. > I'm not set up to recompile. I set up a little test program to reproduce the problem (below). It leads to the Fortran call to gges in _decomp_qz.py. This crashes if sort_t = 1 (which calls the lambda), but not for sort_t = 0. I didn't play with it any further. 
john

# Test qz
import numpy as np
from scipy.linalg import qz

A = np.array([[3.9, 12.5, -34.5, -0.5],
              [4.3, 21.5, -47.5,  7.5],
              [4.3, 21.5, -43.5,  3.5],
              [4.4, 26.0, -46.0,  6.0]])
B = np.array([[1.0, 2.0, -3.0, 1.0],
              [1.0, 3.0, -5.0, 4.0],
              [1.0, 3.0, -4.0, 3.0],
              [1.0, 3.0, -4.0, 4.0]])

#sort = lambda ar,ai,beta : ai == 0   ## crashes
sort = None                           ## runs
print(qz(A,B,sort=sort))

From ralf.gommers at googlemail.com Sat Jul 21 16:00:03 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 21 Jul 2012 22:00:03 +0200 Subject: [SciPy-User] Fwd: KDTree IndexError In-Reply-To: References: Message-ID: On Sat, Jul 21, 2012 at 12:19 AM, Oleksandr Huziy wrote: > Hi, > > the kdtree implementation uses recursion, and the standard limit of the > recursion depth in python is 1000. > > For some of your data you need more than 1000 (for the bad data). > > When I put > > import sys > > sys.setrecursionlimit(10000) > > your bad data script works OK. Probably it would work with less... > > Should we put that in the Notes section of the KDTree docstring? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Sat Jul 21 16:34:32 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Sat, 21 Jul 2012 16:34:32 -0400 Subject: [SciPy-User] Fwd: KDTree IndexError In-Reply-To: References: Message-ID: Hi I've tested this solution on iMac (python2.6) and Ubuntu (python2.7), and it worked. On Ubuntu I got the following message for the initial script: RuntimeError: maximum recursion depth exceeded while calling a Python object which has led me to the solution. It would be helpful to have the warning in the docstring just for the cases when the RuntimeError is not raised. Maybe just wait for DParker's confirmation that the solution works for all of his data. I also tried changing the leafsize parameter but that did not seem to have any effect.
Cheers -- Oleksandr Huziy 2012/7/21 Ralf Gommers > > > On Sat, Jul 21, 2012 at 12:19 AM, Oleksandr Huziy wrote: > >> Hi, >> >> the kdtree implementation uses recursion, and the standard limit of the >> recursion depth in python is 1000. >> >> For some of your data you need more than 1000 (for the bad data). >> >> When I put >> >> import sys >> >> sys.setrecursionlimit(10000) >> >> your bad data script works OK. Probably it would work with less... >> >> > Should we put that in the Notes section of the KDTree docstring? > > Ralf > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From helmrp at yahoo.com Sat Jul 21 19:09:02 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sat, 21 Jul 2012 16:09:02 -0700 (PDT) Subject: [SciPy-User] Documentation Message-ID: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> I'm interested in contributing to SciPy's documentation, but am?uncertain who my point of contact should be. Please enlighten me. ? FYI: BSc, Physics,?Carnegie-Mellon U, 1953 PhD, Math, Carnegie-Mellon U, 1957 Have written a few dozen technical reports, a handful of papers published in technical journals, ???? helped edit/revise/rewrite an earlier version of SimPy, recently rewrote in toto our retirement ???? community's library handbook. The Helmbolds 2645 E Southern Ave A241 Tempe AZ 85282 Email: helmrp at yahoo.com VOX: 480-831-3611 CELL Igor: 480-438-3918 CELL Alf: 602-568-6948 (but not often turned on) -------------- next part -------------- An HTML attachment was scrubbed... 
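For reference, the workaround Oleksandr describes amounts to the sketch below. The value 10000 is the one from his test (a smaller limit may already suffice), and the random points here are only a stand-in for DParker's actual data set:

```python
import sys
import numpy as np
from scipy.spatial import KDTree

# The pure-Python KDTree builds its nodes recursively, so degenerate or
# unbalanced data can exceed Python's default recursion limit of 1000.
sys.setrecursionlimit(10000)

pts = np.random.rand(2000, 3)   # stand-in for the (157237, 3) data set above
tree = KDTree(pts)
dist, idx = tree.query(pts[0])  # nearest neighbour of a point in the tree
print(dist)                     # 0.0 -- the point itself
```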
URL: From scott.sinclair.za at gmail.com Sun Jul 22 03:14:19 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Sun, 22 Jul 2012 09:14:19 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 In-Reply-To: <500A902D.8090905@it.uu.se> References: <500A902D.8090905@it.uu.se> Message-ID: On Jul 21, 2012 1:19 PM, "Virgil Stokes" wrote: > > I installed SciPy 0.11.0rc1 on an Ubuntu 12.04 (64-bit) platform with what > seemed to be no problems. > However, when I ran Spyder it still was using SciPy 0.10.1. I then tried to > uninstall SciPy completely using > > >pip uninstall SciPy > > which again seemed to be ok. But, now I was unable to execute Spyder! I tried > several times to uninstall and reinstall Spyder but I am stuck with no > possibility to run Spyder. I get no error; but, clicking in this application in > Ubuntu 12.04 seems to be ignored. > > Any suggestions as to how to recover Spyder and how to get SciPy running again > would be appreciated. I don't know about Spyder (I don't use it and you haven't given much information about how you tried to un/reinstall it). I suspect it was via the package manager? Did you run pip uninstall with sudo? What happens when you run: python -c "import scipy; print scipy.__version__; print scipy" at the command line? If you get an import error for scipy try reinstalling python-scipy from the package manager. Cheers, Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From vs at it.uu.se Sun Jul 22 04:15:38 2012 From: vs at it.uu.se (Virgil Stokes) Date: Sun, 22 Jul 2012 10:15:38 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 In-Reply-To: References: <500A902D.8090905@it.uu.se> Message-ID: <500BB6AA.1020804@it.uu.se> On 22-Jul-2012 09:14, Scott Sinclair wrote: > > On Jul 21, 2012 1:19 PM, "Virgil Stokes" > wrote: > > > > I installed SciPy 0.11.0rc1 on an Ubuntu 12.04 (64-bit) platform with what > > seemed to be no problems. 
> > However, when I ran Spyder it still was using SciPy 0.10.1. I then tried to > > uninstall SciPy completely using > > > > >pip uninstall SciPy > > > > which again seemed to be ok. But, now I was unable to execute Spyder! I tried > > several times to uninstall and reinstall Spyder but I am stuck with no > > possibility to run Spyder. I get no error; but, clicking in this application in > > Ubuntu 12.04 seems to be ignored. > > > > Any suggestions as to how to recover Spyder and how to get SciPy running again > > would be appreciated. > > I don't know about Spyder (I don't use it and you haven't given much > information about how you tried to un/reinstall it). I suspect it was via the > package manager? > Yes, the first time that I installed it was via the package manager, and it worked as expected. > > Did you run pip uninstall with sudo? > I tried both with pip and via the package manager. When this didn't work I also removed all references to Spyder. > > What happens when you run: > > python -c "import scipy; print scipy.__version__; print scipy" > > at the command line? > root at virgil-laptop:/home/virgil# python -c "import scipy;print scipy.__version__;print scipy" 0.10.1 > > If you get an import error for scipy try reinstalling python-scipy from the > package manager. > > Cheers, > Scott > Ok thanks Scott for your assistance with this problem. Note, I have tried several different methods to uninstall/install Spyder (e.g. package manager and pip) but I still never get the Spyder splash window :-( -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vs at it.uu.se Sun Jul 22 05:20:35 2012 From: vs at it.uu.se (Virgil Stokes) Date: Sun, 22 Jul 2012 11:20:35 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 In-Reply-To: References: <500A902D.8090905@it.uu.se> Message-ID: <500BC5E3.2000406@it.uu.se> On 22-Jul-2012 09:14, Scott Sinclair wrote: > > On Jul 21, 2012 1:19 PM, "Virgil Stokes" > wrote: > > > > I installed SciPy 0.11.0rc1 on an Ubuntu 12.04 (64-bit) platform with what > > seemed to be no problems. > > However, when I ran Spyder it still was using SciPy 0.10.1. I then tried to > > uninstall SciPy completely using > > > > >pip uninstall SciPy > > > > which again seemed to be ok. But, now I was unable to execute Spyder! I tried > > several times to uninstall and reinstall Spyder but I am stuck with no > > possibility to run Spyder. I get no error; but, clicking in this application in > > Ubuntu 12.04 seems to be ignored. > > > > Any suggestions as to how to recover Spyder and how to get SciPy running again > > would be appreciated. > > I don't know about Spyder (I don't use it and you haven't given much > information about how you tried to un/reinstall it). I suspect it was via the > package manager? > > Did you run pip uninstall with sudo? > > What happens when you run: > > python -c "import scipy; print scipy.__version__; print scipy" > > at the command line? > > If you get an import error for scipy try reinstalling python-scipy from the > package manager. 
> > Cheers, > Scott > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Ok Scott, I tried some more things; but, first here is what I get when I try to run Spyder from a command line: root at virgil-laptop:/usr/local/bin# spyder Traceback (most recent call last): File "/usr/local/bin/spyder", line 2, in from spyderlib import spyder File "/usr/local/lib/python2.7/dist-packages/spyderlib/spyder.py", line 91, in from spyderlib.utils.environ import WinUserEnvDialog File "/usr/local/lib/python2.7/dist-packages/spyderlib/utils/environ.py", line 17, in from spyderlib.widgets.dicteditor import DictEditor File "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/dicteditor.py", line 31, in from spyderlib.config import get_icon, get_font File "/usr/local/lib/python2.7/dist-packages/spyderlib/config.py", line 27, in from spyderlib.utils import iofuncs, codeanalysis File "/usr/local/lib/python2.7/dist-packages/spyderlib/utils/iofuncs.py", line 33, in import scipy.io as spio File "/usr/local/lib/python2.7/dist-packages/scipy/io/__init__.py", line 83, in from matlab import loadmat, savemat, byteordercodes File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/__init__.py", line 11, in from mio import loadmat, savemat File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/mio.py", line 15, in from mio4 import MatFile4Reader, MatFile4Writer File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/mio4.py", line 9, in import scipy.sparse File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/__init__.py", line 182, in from csgraph import * AttributeError: 'module' object has no attribute 'csgraph_to_masked' +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ I installed scipy-0.10.1 and then tried to run Spyder --- the following crash report was generated: The crashed program seems to use third-party or local libraries: 
/usr/local/lib/python2.7/dist-packages/scipy/sparse/sparsetools/_coo.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/sparsetools/_csc.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/csgraph/_shortest_path.so /usr/local/lib/python2.7/dist-packages/numpy/linalg/lapack_lite.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/csgraph/_tools.so /usr/local/lib/python2.7/dist-packages/numpy/random/mtrand.so /usr/local/lib/python2.7/dist-packages/numpy/core/scalarmath.so /usr/local/lib/python2.7/dist-packages/numpy/core/multiarray.so /usr/local/lib/python2.7/dist-packages/numpy/lib/_compiled_base.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/csgraph/_min_spanning_tree.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/sparsetools/_bsr.so /usr/local/lib/python2.7/dist-packages/numpy/core/_dotblas.so /usr/local/lib/python2.7/dist-packages/numpy/core/umath.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/sparsetools/_csgraph.so /usr/local/lib/python2.7/dist-packages/numpy/fft/fftpack_lite.so /usr/local/lib/python2.7/dist-packages/numpy/core/_sort.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/sparsetools/_csr.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/csgraph/_traversal.so /usr/local/lib/python2.7/dist-packages/scipy/sparse/sparsetools/_dia.so It is highly recommended to check if the problem persists without those first. Do you want to continue the report process anyway? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Interestingly, a search for csgraph in the file system gives nothing. Do you have any suggestions as to how to fix Spyder-SciPy? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ibarak2000 at yahoo.com Sun Jul 22 12:01:07 2012 From: ibarak2000 at yahoo.com (ilan barak) Date: Sun, 22 Jul 2012 19:01:07 +0300 Subject: [SciPy-User] fmin_l_bfgs_b pass ndarray with an array Message-ID: <500C23C3.9010606@yahoo.com> An HTML attachment was scrubbed... URL: From ibarak2000 at yahoo.com Sun Jul 22 12:11:13 2012 From: ibarak2000 at yahoo.com (ilan barak) Date: Sun, 22 Jul 2012 19:11:13 +0300 Subject: [SciPy-User] fmin_l_bfgs_b pass ndarray with an array Message-ID: <500C2621.9040909@yahoo.com> An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun Jul 22 12:18:14 2012 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 Jul 2012 11:18:14 -0500 Subject: [SciPy-User] fmin_l_bfgs_b pass ndarray with an array In-Reply-To: <500C23C3.9010606@yahoo.com> References: <500C23C3.9010606@yahoo.com> Message-ID: On Sun, Jul 22, 2012 at 11:01 AM, ilan barak wrote:
> Hello,
> I apologize for the long description, this is the best I can do...
>
> I have two functions defined as:
>
> def hypothesis(params):
>     # build a series of length params[0], with starting gap params[1],
>     # repetition params[2] with scale params[3]
>     # using tt as the basic shape
>     result=np.zeros(params[0])
>     for index in np.arange(params[1],params[0]-tt.shape[0],params[2]):
>         result[int(round(index)):int(round(index))+tt.shape[0]]=tt
>     return params[3]*result
>
> def Cost(params,sig): # starting gap params[0], repetition params[1] with scale params[2]
>     # sig is signals to compare to
>     # calculate error cost
>     result=np.linalg.norm(sig-hypothesis([sig.shape[0]]+params))
>     return result
>
> The Cost function requires a 3 parameter list and a signal that is an 800
> long ndarray
> Running the Cost with:
> params=([start,gap,0.04])
> Cost(params,mysig5) , where mysig5 is a length 800 ndarray, works fine.
> However: > p0 = np.array([10.,20.,0.01]) # Initial guess for the parameters > start,gap,scale > mybounds = [(0,20), (10,25), (0.001,0.1)] > x, f, d = optimize.fmin_l_bfgs_b(Cost, p0[:], fprime=None, args=mysig5, > bounds=mybounds, approx_grad=True) > > complains: > > Traceback > C:\Users\ilan\Documents\python\opt_detection_danny1.py > 130 > fmin_l_bfgs_b > C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 199 > func_and_grad > C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 145 > TypeError: Cost() takes exactly 2 arguments (801 given) > > Where am I wrong? > > Use a *tuple* containing the single argument mysig5 for the `args` argument: x, f, d = optimize.fmin_l_bfgs_b(Cost, p0[:], fprime=None, args=(mysig5,), bounds=mybounds, approx_grad=True) Warren thanks > > Ilan > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibarak2000 at yahoo.com Sun Jul 22 12:22:47 2012 From: ibarak2000 at yahoo.com (ilan barak) Date: Sun, 22 Jul 2012 09:22:47 -0700 (PDT) Subject: [SciPy-User] optimize.fmin_l_bfgs_b wrong number of arguments Message-ID: <1342974167.30673.YahooMailNeo@web125404.mail.ne1.yahoo.com> Hello, I apologize for the long description, this is the best I can do... I have two functions defined as: def hypothesis(params): # build a series of length params[0], with starting gap params[1], repetition params[2] and scale params[3] # using tt as the basic shape result=np.zeros(params[0]) for index in np.arange(params[1],params[0]-tt.shape[0],params[2]): result[int(round(index)):int(round(index))+tt.shape[0]]=tt return params[3]*result def Cost(params,sig): # starting gap params[0], repetition params[1] and scale params[2] # sig is the signal to compare to # calculate the error cost
result=np.linalg.norm(sig-hypothesis([sig.shape[0]]+params)) return result The Cost function requires a 3-parameter list and a signal that is an 800-element ndarray. Running the Cost with: params=([start,gap,0.04]) Cost(params,mysig5), where mysig5 is an 800-element ndarray, works fine. However: p0 = np.array([10.,20.,0.01]) # Initial guess for the parameters start,gap,scale mybounds = [(0,20), (10,25), (0.001,0.1)] x, f, d = optimize.fmin_l_bfgs_b(Cost, p0[:], fprime=None, args=mysig5, bounds=mybounds, approx_grad=True) complains: Traceback C:\Users\ilan\Documents\python\opt_detection_danny1.py 130 fmin_l_bfgs_b C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 199 func_and_grad C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 145 TypeError: Cost() takes exactly 2 arguments (801 given) Where am I wrong? thanks Ilan From josef.pktd at gmail.com Sun Jul 22 12:30:08 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 Jul 2012 12:30:08 -0400 Subject: [SciPy-User] optimize.fmin_l_bfgs_b wrong number of arguments In-Reply-To: <1342974167.30673.YahooMailNeo@web125404.mail.ne1.yahoo.com> References: <1342974167.30673.YahooMailNeo@web125404.mail.ne1.yahoo.com> Message-ID: On Sun, Jul 22, 2012 at 12:22 PM, ilan barak wrote: > Hello, > I apologize for the long description, this is the best I can do...
> > I have two functions defined as: > > def hypothesis(params): > # build a series of length params[0], with starting gap > params[1], repetition params[2] and scale params[3] > # using tt as the basic shape > result=np.zeros(params[0]) > for index in > np.arange(params[1],params[0]-tt.shape[0],params[2]): > result[int(round(index)):int(round(index))+tt.shape[0]]=tt > return params[3]*result > > def Cost(params,sig): # starting gap params[0], repetition > params[1] and scale params[2] > # sig is the signal to compare to > # calculate the error cost > result=np.linalg.norm(sig-hypothesis([sig.shape[0]]+params)) > return result > > The Cost function requires a 3-parameter list and a signal that is an > 800-element ndarray. > Running the Cost with: > params=([start,gap,0.04]) > Cost(params,mysig5), where mysig5 is an 800-element ndarray, works > fine. > However: > p0 = np.array([10.,20.,0.01]) # Initial guess for the parameters > start,gap,scale > mybounds = [(0,20), (10,25), (0.001,0.1)] > x, f, d = optimize.fmin_l_bfgs_b(Cost, p0[:], fprime=None, > args=mysig5, bounds=mybounds, approx_grad=True) args=(mysig5,) args should be a tuple, I think. Josef > > complains: > > Traceback > > C:\Users\ilan\Documents\python\opt_detection_danny1.py 130 > fmin_l_bfgs_b > C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 199 > func_and_grad > C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 145 > TypeError: Cost() takes exactly 2 arguments (801 given) > > > Where am I wrong? > > thanks > > Ilan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From ralf.gommers at googlemail.com Sun Jul 22 15:33:39 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 22 Jul 2012 21:33:39 +0200 Subject: [SciPy-User] Documentation In-Reply-To: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> Message-ID: On
Sun, Jul 22, 2012 at 1:09 AM, The Helmbolds wrote: > I'm interested in contributing to SciPy's documentation, but am uncertain > who my point of contact should be. > Please enlighten me. > That's great, welcome! As for a point of contact, usually the best idea is posting to this list, in case of any questions or issues you need feedback on or help with. Do you have an interest in a certain module, or preference for improving docstrings vs. contributing tutorial-style docs? Once we know that, we could give you some more specific ideas of things that need work. As for the way to contribute, there are two options for documentation: - we have a wiki-style editor where you can edit all docs: http://docs.scipy.org/scipy/Milestones/. There's some info for getting started at http://docs.scipy.org/numpy/Front%20Page/. You'll need to ask for edit permissions on this list after registering a username there. - you can send pull requests with changes on Github. You can find useful info related to that at https://github.com/scipy/scipy/blob/master/HACKING.rst.txt Cheers, Ralf > FYI: > BSc, Physics, Carnegie-Mellon U, 1953 > PhD, Math, Carnegie-Mellon U, 1957 > Have written a few dozen technical reports, a handful of papers published > in technical journals, > helped edit/revise/rewrite an earlier version of SimPy, recently > rewrote in toto our retirement > community's library handbook. > > The Helmbolds > 2645 E Southern Ave A241 > Tempe AZ 85282 > Email: helmrp at yahoo.com > VOX: 480-831-3611 > CELL Igor: 480-438-3918 > CELL Alf: 602-568-6948 (but not often turned on) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wesmckinn at gmail.com Sun Jul 22 16:26:12 2012 From: wesmckinn at gmail.com (Wes McKinney) Date: Sun, 22 Jul 2012 16:26:12 -0400 Subject: [SciPy-User] ANN: pandas 0.8.1 released Message-ID: hi all, I'm pleased to announce the 0.8.1 release of pandas. This is primarily a bugfix release, though it includes a number of useful new features and performance improvements that make it an immediate recommended upgrade for all pandas users. Some highlights: - Vectorized, NA-friendly string methods (e.g. series.str.replace('foo', 'bar')) - Improved plotting support for time series data - Improved styling capability for DataFrame plots (specifying a dict of styles) and plotting one column versus another - Several new plot types developed as part of GSoC 2012 (see http://pandasplotting.blogspot.com/ for ongoing developments) - Significantly improved parsing performance of ISO8601 datetime strings Thanks to all who contributed to this release, especially Chang She and Vytautas Jančauskas. As always source archives and Windows installers can be found on PyPI. What's new: http://pandas.pydata.org/pandas-docs/stable/whatsnew.html $ git log v0.8.0..v0.8.1 --pretty=format:%aN | sort | uniq -c | sort -rn 77 Wes McKinney 39 Chang She 11 Vytautas Jancauskas 2 Todd DeLuca 2 Skipper Seabold Happy data hacking! - Wes What is it ========== pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational, time series, or any other kind of labeled data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python.
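The vectorized, NA-friendly string methods in the first highlight behave along these lines (a sketch against a current pandas; the `regex=False` keyword did not exist yet in 0.8.1, where the call was simply `series.str.replace('foo', 'bar')`):

```python
import pandas as pd

# A small Series with a missing value mixed in
s = pd.Series(["foo one", "two foo", None])

# .str methods apply elementwise across the whole Series and
# pass missing values through instead of raising
out = s.str.replace("foo", "bar", regex=False)
print(out.tolist())
```

The missing entry comes back as missing in the result, which is what makes these methods convenient on real, gappy data.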
Links ===== Release Notes: http://github.com/pydata/pandas/blob/master/RELEASE.rst Documentation: http://pandas.pydata.org Installers: http://pypi.python.org/pypi/pandas Code Repository: http://github.com/pydata/pandas Mailing List: http://groups.google.com/group/pydata From hatmatrix at gmail.com Sun Jul 22 18:14:12 2012 From: hatmatrix at gmail.com (hatmatrix) Date: Mon, 23 Jul 2012 00:14:12 +0200 Subject: [SciPy-User] programming solutions for constraint satisfaction problems Message-ID: Hi, I wonder if anyone has experience using any solvers from the following packages for constraint satisfaction problems: constraint: http://www.logilab.org/852 cspy: http://code.google.com/p/cspy-lib/ emma: http://www.eveutilities.com/products/emma My needs are not too demanding, but I wonder if you have a recommendation, particularly regarding ease of use and scalability (speed). Also, if I have an underdetermined problem in which the number of constraints is small, will these solvers give me "a" solution or do I have to instead specify a heuristic (e.g., least-norm) and reformulate the problem specification? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From helmrp at yahoo.com Sun Jul 22 19:11:44 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sun, 22 Jul 2012 16:11:44 -0700 (PDT) Subject: [SciPy-User] Documentation References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> Message-ID: <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> Ralf, Thanks for the response. At the moment I'm interested in editing/revising the docstrings and SciPy Guide sections for the optimize package, as I find them confusing and misleading for newcomers. After getting some more detailed experience with the optimize package, I expect I'd like to try my hand at drafting some tutorials for it.
I noticed that the link to "Cookbook" just goes to a blank page, so if given some guidance on what you envision for that area, I might try drafting up something for it. If I'm to have a go at the optimize package, it would be important to have an expert in optimization theory and practice back me up. I can handle the style and understand the basics well enough to make a good start, but it would be best for all concerned to have someone else checking the factual accuracy of the text. For example, I'd expect the expert to rap my knuckles if I said something as inaccurate as "a vanishing gradient always indicates a minimum", and to remind me to consider stationary points and constraint functions. Moreover, I do not have access to the underlying C algorithms (I'm using Windows 7 operating system), nor to the publications referenced (I have no access to any college or university library). Hence, I may not be interpreting everything correctly. Expect me to be slow but persistent. Also expect me to be very sensitive to imprecise expressions. Some examples of this sensitivity are the following, from the "SciPy Reference Guide, Release 0.11.0.dev-659017f": Re: Section 1.5.1 et seq: This section refers to "multivariate scalar functions". I suppose that "multivariate real-valued functions" is really what is intended. And similarly for "scalar univariate functions". Physicists and mathematicians often consider complex numbers and other things as "scalars". Even though mentioned within the Nelder-Mead section, I still think it's not the best practice to refer to the "simplex algorithm" without some qualification to remind the reader that Dantzig's algorithm is not the intended reference. Text refers to "interior derivatives", and I don't think that is widely understood terminology. Certainly I don't know if what is intended is any different from the ordinary partial derivatives. Docstrings refer to 'jac'.
I haven't looked into this in great detail, but it seems likely that this actually refers to the gradient rather than the Jacobian determinant. But could it be that this terminology is what is used in the underlying C routines? Re: Sections 1.12.1 and 4.7.2: These refer the reader to the "numpy manual", or to "the reference manual" for further details. I cannot find any such "manuals" mentioned in the SciPy document pages on the web. I know this sounds critical, especially from someone new. I feel vulnerable to the counter-criticism "so where were you when the page was blank!?" However, believe me when I say that I have great sympathy for the folks who wrote the code and the documentation. Neither are easy, and to do both is extremely difficult! But perhaps I can help things along if I do my little bit. The Helmbolds 2645 E Southern Ave A241 Tempe AZ 85282 Email: helmrp at yahoo.com VOX: 480-831-3611 CELL Igor: 480-438-3918 CELL Alf: 602-568-6948 (but not often turned on) >________________________________ > From: Ralf Gommers >To: The Helmbolds ; SciPy Users List >Sent: Sunday, July 22, 2012 12:33 PM >Subject: Re: [SciPy-User] Documentation > >On Sun, Jul 22, 2012 at 1:09 AM, The Helmbolds wrote: > >I'm interested in contributing to SciPy's documentation, but am uncertain who my point of contact should be. >>Please enlighten me. > >That's great, welcome! As for a point of contact, usually the best idea is posting to this list, in case of any questions or issues you need feedback on or help with. > >Do you have an interest in a certain module, or preference for improving docstrings vs. contributing tutorial-style docs? Once we know that, we could give you some more specific ideas of things that need work. > >As for the way to contribute, there are two options for documentation: >- we have a wiki-style editor where you can edit all docs: http://docs.scipy.org/scipy/Milestones/. There's some info for getting started at http://docs.scipy.org/numpy/Front%20Page/.
You'll need to ask for edit permissions on this list after registering a username there. >- you can send pull requests with changes on Github. You can find useful info related to that at https://github.com/scipy/scipy/blob/master/HACKING.rst.txt > >Cheers, >Ralf > >(Some stuff removed to avoid unnecessary duplication) -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun Jul 22 19:38:34 2012 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 Jul 2012 18:38:34 -0500 Subject: [SciPy-User] Documentation In-Reply-To: <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> Message-ID: On Sun, Jul 22, 2012 at 6:11 PM, The Helmbolds wrote: > I noticed that the link to "Cookbook" just goes to a blank page, so if > given some guidance on what you envision for that area, I might try > drafting up something for it. > The "Cookbook" page might have been a little slow to appear. Try it again. > If I'm to have a go at the optimize package, it would be important to > have an expert in optimization theory and practice back me up. I can handle > the style and understand the basics well enough to make a good start, but > it would be best for all concerned to have someone else checking the > factual accuracy of the text. For example, I'd expect the expert to rap > my knuckles if I said something as inaccurate as "a vanishing gradient > always indicates a minimum", and to remind me to consider stationary points > and constraint functions. > Agreed! > Moreover, I do not have access to the underlying C algorithms (I'm using > Windows 7 operating system), nor to the publications referenced (I have no > access to any college or university library). Hence, I may not be > interpreting everything correctly. > > Expect me to be slow but persistent.
Also expect me to be very sensitive > to imprecise expressions. > A critical editorial eye on the documentation is quite welcome. Thanks for offering to help. > Some examples of this sensitivity are the following, from the "SciPy > Reference Guide, Release 0.11.0.dev-659017f": > > Re: Section 1.5.1 et seq: This section refers to "multivariate scalar > functions". I suppose that "multivariate real-valued functions" is really > what is intended. And similarly for "scalar univariate functions". > Physicists and mathematicians often consider complex numbers and other > things as "scalars". > Even though mentioned within the Nelder-Mead section, I still think > it's not the best practice to refer to the "simplex algorithm" without some > qualification to remind the reader that Dantzig's algorithm is not the > intended reference. > Text refers to "interior derivatives", and I don't think that is > widely understood terminology. Certainly I don't know if what is intended > is any different from the ordinary partial derivatives. > Docstrings refer to 'jac'. I haven't looked into this in great detail, > but it seems likely that this actually refers to the gradient rather than > the Jacobian determinant. But could it be that this terminology is what is > used in the underlying C routines? > 'jac' refers to the Jacobian matrix ( http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant). > > Re: Sections 1.12.1 and 4.7.2: These refer the reader to the "numpy > manual", or to "the reference manual" for further details. I cannot find > any such "manuals" mentioned in the SciPy document pages on the web. > > I know this sounds critical, especially from someone new. I feel > vulnerable to the counter-criticism "so where were you when the page was > blank!?" However, believe me when I say that I have great sympathy for the > folks who wrote the code and the documentation. Neither are easy, and to do > both is extremely difficult! 
But perhaps I can help things along if I do my > little bit. > > Any help is appreciated. Please dive in! Warren The Helmbolds > 2645 E Southern Ave A241 > Tempe AZ 85282 > Email: helmrp at yahoo.com > VOX: 480-831-3611 > CELL Igor: 480-438-3918 > CELL Alf: 602-568-6948 (but not often turned on) > > *From:* Ralf Gommers > *To:* The Helmbolds ; SciPy Users List < > scipy-user at scipy.org> > *Sent:* Sunday, July 22, 2012 12:33 PM > *Subject:* Re: [SciPy-User] Documentation > > On Sun, Jul 22, 2012 at 1:09 AM, The Helmbolds wrote: > > I'm interested in contributing to SciPy's documentation, but am uncertain > who my point of contact should be. > Please enlighten me. > > > That's great, welcome! As for a point of contact, usually the best idea is > posting to this list, in case of any questions or issues you need feedback > on or help with. > > Do you have an interest in a certain module, or preference for improving > docstrings vs. contributing tutorial-style docs? Once we know that we could > give you some more specific ideas of things that need work. > > As for the way to contribute, there are two for documentation: > - we have a wiki-style editor where you can edit all docs: > http://docs.scipy.org/scipy/Milestones/. There's some info for getting > started at http://docs.scipy.org/numpy/Front%20Page/. You'll need to ask > for edit permissions on this list after registering a username there. > - you can send pull requests with changes on Github. You can find useful > info related to that at > https://github.com/scipy/scipy/blob/master/HACKING.rst.txt > > Cheers, > Ralf > > (Some stuff removed to avoid unneceesary duplication) > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scott.sinclair.za at gmail.com Mon Jul 23 02:50:25 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Mon, 23 Jul 2012 08:50:25 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 In-Reply-To: <500BC5E3.2000406@it.uu.se> References: <500A902D.8090905@it.uu.se> <500BC5E3.2000406@it.uu.se> Message-ID: On 22 July 2012 11:20, Virgil Stokes wrote: > On 22-Jul-2012 09:14, Scott Sinclair wrote: > > On Jul 21, 2012 1:19 PM, "Virgil Stokes" wrote: >> >> I installed SciPy 0.11.0rc1 on an Ubuntu 12.04 (64-bit) platform with what >> seemed to be no problems. >> However, when I ran Spyder it still was using SciPy 0.10.1. I then tried >> to >> uninstall SciPy completely using >> >> >pip uninstall SciPy >> >> which again seemed to be ok. But, now I was unable to execute Spyder! I >> tried >> several times to uninstall and reinstall Spyder but I am stuck with no >> possibility to run Spyder. I get no error; but, clicking in this >> application in >> Ubuntu 12.04 seems to be ignored. 
> > I tried some more things; but, first here is what I get when I try to run > Spyder from a command line: > > root at virgil-laptop:/usr/local/bin# spyder > Traceback (most recent call last): > File "/usr/local/bin/spyder", line 2, in > from spyderlib import spyder > File "/usr/local/lib/python2.7/dist-packages/spyderlib/spyder.py", line > 91, in > from spyderlib.utils.environ import WinUserEnvDialog > File "/usr/local/lib/python2.7/dist-packages/spyderlib/utils/environ.py", > line 17, in > from spyderlib.widgets.dicteditor import DictEditor > File > "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/dicteditor.py", > line 31, in > from spyderlib.config import get_icon, get_font > File "/usr/local/lib/python2.7/dist-packages/spyderlib/config.py", line > 27, in > from spyderlib.utils import iofuncs, codeanalysis > File "/usr/local/lib/python2.7/dist-packages/spyderlib/utils/iofuncs.py", > line 33, in > import scipy.io as spio > File "/usr/local/lib/python2.7/dist-packages/scipy/io/__init__.py", line > 83, in > from matlab import loadmat, savemat, byteordercodes > File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/__init__.py", > line 11, in > from mio import loadmat, savemat > File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/mio.py", line > 15, in > from mio4 import MatFile4Reader, MatFile4Writer > File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/mio4.py", > line 9, in > import scipy.sparse > File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/__init__.py", > line 182, in > from csgraph import * > AttributeError: 'module' object has no attribute 'csgraph_to_masked' > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ It's not a good idea to be running as root. The recommended practice is to run terminal sessions as your regular user and use `sudo` if you need temporary privileges to access system directories etc.
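Before removing anything, it can help to confirm which scipy (if any) the interpreter can actually see. A stdlib-only diagnostic sketch on a modern Python (the `locate` helper name is made up for illustration; in the Python 2.7 era the rough equivalent was `imp.find_module`):

```python
import importlib.util

def locate(name):
    """Return the file a module would be imported from, or None if it isn't importable."""
    try:
        spec = importlib.util.find_spec(name)
    except (ImportError, ValueError):
        return None
    return spec.origin if spec else None

# Report what the interpreter actually sees for scipy and its csgraph submodule
for mod in ("scipy", "scipy.sparse", "scipy.sparse.csgraph"):
    print(mod, "->", locate(mod) or "not found")
```

A partially removed install typically shows the package as importable while a submodule is missing or stale, which is consistent with the AttributeError in the traceback above.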
It looks like your scipy install in /usr/local/lib/python2.7/dist-packages/scipy/ was broken when you tried to install the release candidate. The minimum you'll need to do is remove the broken scipy install and reinstall it. I suggest the following (as your regular user): # Remove broken scipy install virgil at virgil-laptop:~$ sudo rm -rf /usr/local/lib/python2.7/dist-packages/scipy/ virgil at virgil-laptop:~$ sudo rm -rf /usr/local/lib/python2.7/dist-packages/scipy-*.egg* # Do a fresh install # Either virgil at virgil-laptop:~$ sudo pip install scipy # or avoid mucking in your system directories virgil at virgil-laptop:~$ pip install --user scipy # Install python-nose using your package manager or do virgil at virgil-laptop:~$ pip install --user nose # and test your scipy virgil at virgil-laptop:~$ python -c "import scipy; scipy.test()" If you're lucky the scipy test suite will complete without any errors and Spyder will work at this point. If you still have problems with scipy let us know, if your spyder install doesn't work you'll need to trouble-shoot it over on the spyder list (https://groups.google.com/forum/?fromgroups#!forum/spyderlib). Good luck! Cheers, Scott From vs at it.uu.se Mon Jul 23 03:57:53 2012 From: vs at it.uu.se (Virgil Stokes) Date: Mon, 23 Jul 2012 09:57:53 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 In-Reply-To: References: <500A902D.8090905@it.uu.se> <500BC5E3.2000406@it.uu.se> Message-ID: <500D0401.6080901@it.uu.se> On 23-Jul-2012 08:50, Scott Sinclair wrote: > On 22 July 2012 11:20, Virgil Stokes wrote: >> On 22-Jul-2012 09:14, Scott Sinclair wrote: >> >> On Jul 21, 2012 1:19 PM, "Virgil Stokes" wrote: >>> I installed SciPy 0.11.0rc1 on an Ubuntu 12.04 (64-bit) platform with what >>> seemed to be no problems. >>> However, when I ran Spyder it still was using SciPy 0.10.1. I then tried >>> to >>> uninstall SciPy completely using >>> >>> >pip uninstall SciPy >>> >>> which again seemed to be ok. 
But, now I was unable to execute Spyder! I >>> tried >>> several times to uninstall and reinstall Spyder but I am stuck with no >>> possibility to run Spyder. I get no error; but, clicking in this >>> application in >>> Ubuntu 12.04 seems to be ignored. >> I tried some more things; but, first here is what I get when I try to run >> Spyder from a command line: >> >> root at virgil-laptop:/usr/local/bin# spyder >> Traceback (most recent call last): >> File "/usr/local/bin/spyder", line 2, in >> from spyderlib import spyder >> File "/usr/local/lib/python2.7/dist-packages/spyderlib/spyder.py", line >> 91, in >> from spyderlib.utils.environ import WinUserEnvDialog >> File "/usr/local/lib/python2.7/dist-packages/spyderlib/utils/environ.py", >> line 17, in >> from spyderlib.widgets.dicteditor import DictEditor >> File >> "/usr/local/lib/python2.7/dist-packages/spyderlib/widgets/dicteditor.py", >> line 31, in >> from spyderlib.config import get_icon, get_font >> File "/usr/local/lib/python2.7/dist-packages/spyderlib/config.py", line >> 27, in >> from spyderlib.utils import iofuncs, codeanalysis >> File "/usr/local/lib/python2.7/dist-packages/spyderlib/utils/iofuncs.py", >> line 33, in >> import scipy.io as spio >> File "/usr/local/lib/python2.7/dist-packages/scipy/io/__init__.py", line >> 83, in >> from matlab import loadmat, savemat, byteordercodes >> File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/__init__.py", >> line 11, in >> from mio import loadmat, savemat >> File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/mio.py", line >> 15, in >> from mio4 import MatFile4Reader, MatFile4Writer >> File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/mio4.py", >> line 9, in >> import scipy.sparse/usr/local/lib/python2.7/dist-packages/scipy >> File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/__init__.py", >> line 182, in >> from csgraph import * >> AttributeError: 'module' object has no attribute 'csgraph_to_masked' >> 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > It's not a good idea to be running as root. The recommended practice > is to run terminal sessions as your regular user and use `sudo` if you > need temporary privileges to access system directories etc. > > It looks like your scipy install in > /usr/local/lib/python2.7/dist-packages/scipy/ was broken when you > tried to install the release candidate. The minimum you'll need to do > is remove the broken scipy install and reinstall it. I suggest the > following (as your regular user): > > # Remove broken scipy install > virgil at virgil-laptop:~$ sudo rm -rf > /usr/local/lib/python2.7/dist-packages/scipy/ > virgil at virgil-laptop:~$ sudo rm -rf > /usr/local/lib/python2.7/dist-packages/scipy-*.egg* > > # Do a fresh install > # Either > virgil at virgil-laptop:~$ sudo pip install scipy > # or avoid mucking in your system directories > virgil at virgil-laptop:~$ pip install --user scipy > > # Install python-nose using your package manager or do > virgil at virgil-laptop:~$ pip install --user nose > > # and test your scipy > virgil at virgil-laptop:~$ python -c "import scipy; scipy.test()" > > If you're lucky the scipy test suite will complete without any errors > and Spyder will work at this point. If you still have problems with > scipy let us know, if your spyder install doesn't work you'll need to > trouble-shoot it over on the spyder list > (https://groups.google.com/forum/?fromgroups#!forum/spyderlib). > > Good luck! > > Cheers, > Scott > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Ok Scott, Your instructions were good --- I now have Spyder working with scipy 0.10.1. However, when I ran scipy.test(), I received the following final output: Ran 5103 tests in 47.160s OK (KNOWNFAIL=13, SKIP=24) Are these the expected results for a working scipy 0.10.1?
Thanks very much for your help Scott, Have a good day! From jeremy at jeremysanders.net Mon Jul 23 04:20:50 2012 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Mon, 23 Jul 2012 09:20:50 +0100 Subject: [SciPy-User] ANN: Veusz 1.16 Message-ID: Veusz 1.16 ---------- http://home.gna.org/veusz/ Veusz is a Qt4 based scientific plotting package. It is written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF/SVG output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets. Data can be captured from external sources such as Internet sockets or other programs. Changes in 1.16: * Experimental MathML support * Add upper/lower-left/right arrows * Add options to clip text/lines/shapes to graphs * Add stacked-area option to bar plot widget * Draw stacked bar plots top-down for better overlapping line behaviour * Axis labels can be placed at left or right of axes, in addition to centre * Line widget now has length-angle or point-to-point modes. Better support for only specifying some coordinates. 
* Exception dialog records more detailed traceback
* Use top level windows for non-modal dialogs, giving minimize in window and no always-on-top behaviour
* Zero length vectors and arrows in vector fields are not plotted
* Add support for strings to be translated
* Add "Sort" dataset plugin
* Add "Histogram 2D" dataset plugin
* Add "Divide by Maximum" and "Normalize" dataset plugins
* Support for *args and **kwargs for custom functions
* Custom colormaps can be defined in the custom editing dialog

Bug fixes:
* Use correct definition of 1pt = 1/72in
* Workaround for splash screen problem
* Fix numerous problems reported by pyflakes
* Fix histograms failing when saved
* Fix plot with nan functions
* Fix failure of self tests on ARM platforms
* Force pages/documents to have physical sizes
* Fix crash if deleting multiple datasets in data edit dialog
* Check dimensions of datasets in SetData
* Handle zero bytes in data files better
* Fix error if page size zero
* Fix error if vector baselength is zero
* If dataset plugin parameter not given in saved file, use default
* Fix crash for axes with same minimum and maximum
* Fix CSV import problem when same dataset has multiple types
* Thinning markers works when using marker sizes / colors

Features of package:
* X-Y plots (with errorbars)
* Line and function plots
* Contour plots
* Images (with colour mappings and colorbars)
* Stepped plots (for histograms)
* Bar graphs
* Vector field plots
* Box plots
* Polar plots
* Ternary plots
* Plotting dates
* Fitting functions to data
* Stacked plots and arrays of plots
* Plot keys
* Plot labels
* Shapes and arrows on plots
* LaTeX-like formatting for text
* EPS/PDF/PNG/SVG/EMF export
* Scripting interface
* Dataset creation/manipulation
* Embed Veusz within other programs
* Text, CSV, FITS, NPY/NPZ, QDP, binary and user-plugin importing
* Data can be captured from external sources
* User defined functions, constants, and imports of external Python functions
* Plugin interface to allow users to write or load code to
  - import data using new formats
  - make new datasets, optionally linked to existing datasets
  - arbitrarily manipulate the document
* Data picker
* Interactive tutorial
* Multithreaded rendering

Requirements for source install:
Python (2.4 or greater required) http://www.python.org/
Qt >= 4.4 (free edition) http://www.trolltech.com/products/qt/
PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/software/pyqt/ http://www.riverbankcomputing.co.uk/software/sip/
numpy >= 1.0 http://numpy.scipy.org/

Optional:
PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits
pyemf >= 2.0.0 (optional for EMF export) http://pyemf.sourceforge.net/
PyMinuit >= 1.1.2 (optional improved fitting) http://code.google.com/p/pyminuit/
For EMF and better SVG export, PyQt >= 4.6 is required, to fix a bug in the C++ wrapping
dbus-python, for dbus interface http://dbus.freedesktop.org/doc/dbus-python/

For documentation on using Veusz, see the "Documents" directory. The manual is in PDF, HTML and text format (generated from docbook). The examples are also useful documentation. Please also see and contribute to the Veusz wiki: http://barmag.net/veusz-wiki/

Issues with the current version:
* Due to a bug in the Qt XML processing, some MathML elements containing purely white space (e.g. thin space) will give an error.

If you enjoy using Veusz, we would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the Git repository at https://github.com/jeremysanders/veusz.git.
From lanceboyle at qwest.net Mon Jul 23 04:30:17 2012 From: lanceboyle at qwest.net (Jerry) Date: Mon, 23 Jul 2012 01:30:17 -0700 Subject: [SciPy-User] ANN: Veusz 1.16 In-Reply-To: References: Message-ID: On Jul 23, 2012, at 1:20 AM, Jeremy Sanders wrote: > Veusz 1.16 > ---------- > http://home.gna.org/veusz/ > > Veusz is a Qt4 based scientific plotting package. It is written in > Python, using PyQt4 for display and user-interfaces, and numpy for > handling the numeric data. Veusz is designed to produce > publication-ready Postscript/PDF/SVG output. The user interface aims > to be simple, consistent and powerful. > > Veusz provides a GUI, command line, embedding and scripting interface > (based on Python) to its plotting facilities. It also allows for > manipulation and editing of datasets. Data can be captured from > external sources such as Internet sockets or other programs. > Veusz is an unusually good spin on interactive 2D plotters. It handles the seemingly endless clicking-around that is endemic to getting and managing decent plots better than any other plotter that I've tried, with the _possible_ exception of Igor Pro (a 500+ USD programming package). I hope to write a better review some day but for now, I'll just say, thanks, Jeremy! Jerry

From scott.sinclair.za at gmail.com Mon Jul 23 04:52:21 2012 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Mon, 23 Jul 2012 10:52:21 +0200 Subject: [SciPy-User] Spyder-Ubuntu 12.04 problem with SciPy 0.11.0rc1 In-Reply-To: <500D0401.6080901@it.uu.se> References: <500A902D.8090905@it.uu.se> <500BC5E3.2000406@it.uu.se> <500D0401.6080901@it.uu.se> Message-ID: On 23 July 2012 09:57, Virgil Stokes wrote: > Your instructions were good --- I now have Spyder working with scipy 0.10.1. Glad to hear that. > However, when I ran the scipy.test(), I received the following final outputs: > > Ran 5103 tests in 47.160s > > OK (KNOWNFAIL=13, SKIP=24) > > Are these the expected results for a working scipy 0.10.1? Yes.
The 'OK' informs you that the tests were completed successfully. Don't worry about the KNOWNFAILs. Cheers, Scott

From servant.mathieu at gmail.com Mon Jul 23 05:53:47 2012 From: servant.mathieu at gmail.com (servant mathieu) Date: Mon, 23 Jul 2012 11:53:47 +0200 Subject: [SciPy-User] efficiency of the simplex routine: R (optim) vs scipy.optimize.fmin In-Reply-To: References: Message-ID: Hi Denis, Thanks for your response. For the fmin function in scipy, I took the default ftol and xtol values. I'm just trying to minimize a chi square between observed experimental data and simulated data. I've done this in python and R with the Nelder-Mead algorithm, with exactly the same starting values. While the solutions produced by R and python are not very different, R systematically produces a lower chi-square after the same number of iterations. This may be related to ftol and xtol, but I don't know which value I should give to these parameters.... Cheers, Mat 2012/7/20 denis > Hi Mathieu, > (months later) two differences among implementations of Nelder-Mead: > 1) the start simplex: x0 +- what ? It's common to take x0 + a fixed > (user-specified) stepsize in each dimension. NLOpt takes a "walking > simplex", don't know what R does > > 2) termination: what ftol, xtol did you specify ? NLOpt looks at > fhi - flo: fhi changes at each iteration, flo is sticky. > > Could you post a testcase similar to yours ? > That would sure be helpful. > > cheers > -- denis > > > On 24/05/2012 10:15, servant mathieu wrote: > > Dear scipy users, > > Again a question about optimization. > > I've just compared the efficiency of the simplex routine in R > > (optim) vs scipy (fmin), when minimizing a chi-square. fmin is faster > > than optim, but appears to be less efficient. In R, the value of the > > function is always minimized step by step (there are of course some > > exceptions) while there are a lot of fluctuations in python.
Given that the > > underlying simplex algorithm is supposed to be the same, which mechanism > > is responsible for this difference? Is it possible to constrain fmin so > > it could be more rigorous? > > Cheers, > > Mathieu > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From helmrp at yahoo.com Mon Jul 23 08:17:21 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Mon, 23 Jul 2012 05:17:21 -0700 (PDT) Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> Message-ID: <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> Oh, so that's what's going on!! All the optimize routines assume that the gradients are supplied in the form of one-dimensional (1D or one-axis) numpy arrays, and that the Hessians are supplied in the form of two-axis (2D or two-dimensional) numpy arrays. [Has terminology settled on always using 'nD' in place of 'n-dimensional' and 'n-axis'? 'nD' is shorter.] But to find that out I had to examine the code for `rosen_der`, and discover that its return value was a numpy array. Now, how far is this generalizable? Is it true that _all_ native SciPy routines require numpy arrays as inputs? Is it true that _none_ of the native SciPy routines _ever_ take ordinary Python sequences as input? Is it true that _all_ native SciPy routines return numpy arrays and _never_ an ordinary Python sequence? If those broad generalizations are true, then perhaps it should be stated (even emphasized) in the documentation. If not true, then maybe it would be a good idea to flag this as a possible "gotcha" -- one that might signal its presence by an error message of the form "bad operation for : '...'", where '...' is some ordinary Python sequence such as a list or tuple. Also, if the broad generalization is not true, then maybe SciPy routines could themselves check the input type and do something appropriate, such as: 1. Tacitly convert whatever they receive to the type they want to use in the body of the routine, or 2. Provide the user with an error message stating that the input type is wrong and what input type is required? Bob The Helmbolds 2645 E Southern Ave A241 Tempe AZ 85282 Email: helmrp at yahoo.com VOX: 480-831-3611 CELL Igor: 480-438-3918 CELL Alf: 602-568-6948 (but not often turned on) >________________________________ > From: Warren Weckesser >To: The Helmbolds ; SciPy Users List >Cc: Ralf Gommers >Sent: Sunday, July 22, 2012 4:38 PM >Subject: Re: [SciPy-User] Documentation > >On Sun, Jul 22, 2012 at 6:11 PM, The Helmbolds wrote: > > > > >> -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sturla at molden.no Mon Jul 23 10:13:49 2012 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 Jul 2012 16:13:49 +0200 Subject: [SciPy-User] Documentation In-Reply-To: <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> Message-ID: <094927AD-3C3E-48E6-94B9-4BA670B69FB5@molden.no> On 23 July 2012 at 01:11, The Helmbolds wrote: > Moreover, I do not have access to the underlying C algorithms (I'm using Windows 7 operating system), nor to the publications referenced (I have no access to any college or university library). Huh? What has Windows 7 got to do with that? I'm using Windows 7 too, and all the SciPy sources are on public display on github. > Hence, I may not be interpreting everything correctly. In which case you shouldn't tamper with the documentation. Sturla -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From sturla at molden.no Mon Jul 23 10:36:56 2012 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 Jul 2012 16:36:56 +0200 Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> Message-ID: <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> On 23 July 2012 at 14:17, The Helmbolds wrote: > Now, how far is this generalizable? Is it true that _all_ native SciPy routines require numpy arrays as inputs? Is it true that _none_ of the native SciPy routines _ever_ take ordinary Python sequences as input? Is it true that _all_ native SciPy routines return numpy arrays and _never_ an ordinary Python sequence? If the documentation says an argument is "array like" it means it can be "any Python sequence that can be coverted to a NumPy array." Internally, native SciPy routines often work with NumPy arrays represented as C pointers or Fortran arrays. They will do the conversion from Python sequences to NumPy if the documentation says so. The NumPy functions np.ascontiguousarray and np.asfortranarray are often used internally in SciPy for this purpose; that is, they will only make a copy if the argument is not a NumPy ndarray that can be passed safely to C or Fortran. Any Python sequence which is not a C or Fortran contiguous NumPy array will be converted in that process. Sturla -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From sturla at molden.no Mon Jul 23 10:39:42 2012 From: sturla at molden.no (Sturla Molden) Date: Mon, 23 Jul 2012 16:39:42 +0200 Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> Message-ID: On 23 July 2012 at 16:36, Sturla Molden wrote: > > > On 23 July 2012 at 14:17, The Helmbolds wrote: > > >> Now, how far is this generalizable? Is it true that _all_ native SciPy routines require numpy arrays as inputs? Is it true that _none_ of the native SciPy routines _ever_ take ordinary Python sequences as input? Is it true that _all_ native SciPy routines return numpy arrays and _never_ an ordinary Python sequence? > > If the documentation says an argument is "array like" it means it can be "any Python sequence that can be coverted to a NumPy array." Or converted ;) S.M. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From helmrp at yahoo.com Mon Jul 23 11:00:09 2012 From: helmrp at yahoo.com (Robaula) Date: Mon, 23 Jul 2012 08:00:09 -0700 Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> Message-ID: <2CAEC683-A8D0-425D-AC06-01A8ED5147D4@yahoo.com> Ok. But then, strictly speaking, the gradient inputs to the optimize functions must not return 'array like' values, even though -- in fact, precisely because -- they have to return NumPy arrays. To me, that seems kinda wacky.
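The behaviour being questioned here, that the optimizers expect a gradient callback to hand back a NumPy array rather than an arbitrary sequence, is easy to work around on the caller's side. The sketch below is illustrative and not code from the thread: `rosen_der_list` is a hypothetical gradient callback (it computes the Rosenbrock gradient) that returns a plain Python list, and `as_array_gradient` coerces whatever the callback returns with `np.asarray` before the optimizer ever sees it.

```python
import numpy as np

def rosen_der_list(x):
    # Hypothetical gradient callback: computes the Rosenbrock gradient
    # but returns a plain Python list instead of an ndarray.
    x = np.asarray(x, dtype=float)
    der = np.zeros_like(x)
    der[1:-1] = (200 * (x[1:-1] - x[:-2] ** 2)
                 - 400 * x[1:-1] * (x[2:] - x[1:-1] ** 2)
                 - 2 * (1 - x[1:-1]))
    der[0] = -400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0])
    der[-1] = 200 * (x[-1] - x[-2] ** 2)
    return der.tolist()

def as_array_gradient(fprime):
    # Wrap an "array like"-returning gradient so that the optimizer
    # always receives a float ndarray.
    def wrapped(x, *args):
        return np.asarray(fprime(x, *args), dtype=float)
    return wrapped

grad = as_array_gradient(rosen_der_list)
g = grad([1.0, 1.0, 1.0])  # evaluated at the Rosenbrock minimum, so zero
```

The wrapped function could then be passed as the gradient argument to an optimizer such as `scipy.optimize.fmin_bfgs`, with the bare list never reaching SciPy.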
Sent from Robaula's iPad helmrp at yahoo.com 2645 E Southern, A241 Tempe, AZ 85282 VOX: 480-831-3611 CELL: 602-568-6948 (but not often turned on) On Jul 23, 2012, at 7:39 AM, Sturla Molden wrote: > > > On 23 July 2012 at 16:36, Sturla Molden wrote: > >> >> >> On 23 July 2012 at 14:17, The Helmbolds wrote: >> >> >>> Now, how far is this generalizable? Is it true that _all_ native SciPy routines require numpy arrays as inputs? Is it true that _none_ of the native SciPy routines _ever_ take ordinary Python sequences as input? Is it true that _all_ native SciPy routines return numpy arrays and _never_ an ordinary Python sequence? >> >> If the documentation says an argument is "array like" it means it can be "any Python sequence that can be coverted to a NumPy array." > > > Or converted ;) > > > S.M. > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From helmrp at yahoo.com Mon Jul 23 11:12:05 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Mon, 23 Jul 2012 08:12:05 -0700 (PDT) Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> Message-ID: <1343056325.59465.YahooMailNeo@web31812.mail.mud.yahoo.com> Thanks. That's good to know. So if all the optimize routines simply inserted fprime = np.ascontiguousarray(fprime) before calling _minimize_(whatever), the user's `fprime` function could return any "array like" object and all would be well?
The Helmbolds 2645 E Southern Ave A241 Tempe AZ 85282 Email: helmrp at yahoo.com VOX: 480-831-3611 CELL Igor: 480-438-3918 CELL Alf: 602-568-6948 (but not often turned on) >________________________________ > From: Sturla Molden >To: The Helmbolds ; SciPy Users List >Cc: Warren Weckesser ; SciPy Users List >Sent: Monday, July 23, 2012 7:36 AM >Subject: Re: [SciPy-User] Gradient inputs to SciPy optimize routines > > >The NumPy functions np.ascontiguousarray and np.asfortranarray are often used internally in SciPy for this purpose; that is, they will only make a copy if the argument is not a NumPy ndarray that can be passed safely to C or Fortran. Any Python sequence which is not a C or Fortran contiguous NumPy array will be converted in that process. > >Sturla -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanforeest at gmail.com Mon Jul 23 13:43:50 2012 From: vanforeest at gmail.com (nicky van foreest) Date: Mon, 23 Jul 2012 19:43:50 +0200 Subject: [SciPy-User] programming solutions for constraint satisfaction problems In-Reply-To: References: Message-ID: Hi, Thanks for the list. I don't have any experience with any of these packages, though. Nicky On 23 July 2012 00:14, hatmatrix wrote: > Hi, I wonder if anyone has experience using any solvers from the following > packages for constraint satisfaction problems: > > constraint: http://www.logilab.org/852 > cspy: http://code.google.com/p/cspy-lib/ > emma: http://www.eveutilities.com/products/emma > > My needs are not too demanding, but wonder if you had a recommendation, > particularly regarding ease of use and scalability (speed). > > Also, if I have an underdetermined problem in which the number of > constraints are small, will these solvers give me "a" solution or do I have > to instead specify a heuristic (e.g., least-norm) and reformulate the > problem specification? > > Thanks. 
> > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user >

From DParker at chromalloy.com Mon Jul 23 15:26:44 2012 From: DParker at chromalloy.com (DParker at chromalloy.com) Date: Mon, 23 Jul 2012 15:26:44 -0400 Subject: [SciPy-User] Fwd: KDTree IndexError In-Reply-To: References: Message-ID: Increasing the recursion limit to 10000 worked for all of the data I am currently working with. I agree that it would be helpful to note the possibility of this error in the documentation. David G. Parker

From: Oleksandr Huziy To: SciPy Users List Date: 07/21/2012 04:34 PM Subject: Re: [SciPy-User] Fwd: KDTree IndexError Sent by: scipy-user-bounces at scipy.org Hi I've tested this solution on iMac (python2.6) and Ubuntu (python2.7), and it worked. On Ubuntu I got the following message for the initial script: RuntimeError: maximum recursion depth exceeded while calling a Python object which has led me to the solution. It would be helpful to have the warning in the docstring just for the cases when the RuntimeError is not raised. Maybe just wait for DParker's confirmation that the solution works for all of his data. I also tried changing the leafsize parameter but that did not seem to have any effect. Cheers -- Oleksandr Huziy 2012/7/21 Ralf Gommers On Sat, Jul 21, 2012 at 12:19 AM, Oleksandr Huziy wrote: Hi, the kdtree implementation uses recursion, and the standard limit of the recursion depth in python is 1000. For some of your data you need more than 1000 (for the bad data). When I put import sys sys.setrecursionlimit(10000) your bad data script works OK. Probably it would work with less... Should we put that in the Notes section of the KDTree docstring?
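The fix described in this thread, raising the interpreter's recursion limit before building the tree, can be wrapped so that the old limit is restored afterwards. This is a stdlib-only sketch, not code from the thread; the `KDTree` call in the comment is the assumed application here and would require scipy:

```python
import sys

def with_recursion_limit(limit, func, *args, **kwargs):
    # Temporarily raise (never lower) the interpreter's recursion limit,
    # call func, and restore the previous limit even if func raises.
    old = sys.getrecursionlimit()
    sys.setrecursionlimit(max(limit, old))
    try:
        return func(*args, **kwargs)
    finally:
        sys.setrecursionlimit(old)

# Assumed usage for the problem in this thread (requires scipy):
#   tree = with_recursion_limit(10000, KDTree, points)

before = sys.getrecursionlimit()
limit_inside = with_recursion_limit(10000, sys.getrecursionlimit)
```

Restoring the limit in a `finally` block keeps a deep KDTree build from permanently changing interpreter state for the rest of the program.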
Ralf _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL:

From david_baddeley at yahoo.com.au Mon Jul 23 21:21:31 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Mon, 23 Jul 2012 18:21:31 -0700 (PDT) Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: <1343056325.59465.YahooMailNeo@web31812.mail.mud.yahoo.com> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> <1343056325.59465.YahooMailNeo@web31812.mail.mud.yahoo.com> Message-ID: <1343092891.82885.YahooMailNeo@web113420.mail.gq1.yahoo.com> I would speculate that the reason the optimise routines don't accept arbitrary sequence types for the Jacobian is one of performance. The gradient functions get called many times during the optimisation and Python's sequence types are slow enough that converting the arbitrary sequence into an array would probably be slower than computing the Jacobian using finite differences within the algorithm. Although calling np.ascontiguousarray has relatively low overhead if the data is already an array, it is another function call (expensive in python) and could be up to ~30% of total running time for a simple model function. The same applies to too many checks on data type. Within the optimising loop, they will be costly. I like to think of all scipy functions as being designed to work on numpy arrays, but some of them will gracefully do the conversion for you if you give them the wrong data type.
cheers, David

________________________________ From: The Helmbolds To: Sturla Molden ; SciPy Users List Sent: Tuesday, 24 July 2012 3:12 AM Subject: Re: [SciPy-User] Gradient inputs to SciPy optimize routines Thanks. That's good to know. So if all the optimize routines simply inserted fprime = np.ascontiguousarray(fprime) before calling _minimize_(whatever), the user's `fprime` function could return any "array like" object and all would be well? The Helmbolds 2645 E Southern Ave A241 Tempe AZ 85282 Email: helmrp at yahoo.com VOX: 480-831-3611 CELL Igor: 480-438-3918 CELL Alf: 602-568-6948 (but not often turned on) From: Sturla Molden >To: The Helmbolds ; SciPy Users List >Cc: Warren Weckesser ; SciPy Users List >Sent: Monday, July 23, 2012 7:36 AM >Subject: Re: [SciPy-User] Gradient inputs to SciPy optimize routines > > >The NumPy functions np.ascontiguousarray and np.asfortranarray are often used internally in SciPy for this purpose; that is, they will only make a copy if the argument is not a NumPy ndarray that can be passed safely to C or Fortran. Any Python sequence which is not a C or Fortran contiguous NumPy array will be converted in that process. > >Sturla _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ralf.gommers at googlemail.com Tue Jul 24 02:11:40 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 24 Jul 2012 08:11:40 +0200 Subject: [SciPy-User] Documentation In-Reply-To: <094927AD-3C3E-48E6-94B9-4BA670B69FB5@molden.no> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <094927AD-3C3E-48E6-94B9-4BA670B69FB5@molden.no> Message-ID: On Mon, Jul 23, 2012 at 4:13 PM, Sturla Molden wrote: > > On 23 July 2012 at 01:11, The Helmbolds wrote: > > > Moreover, I do not have access to the underlying C algorithms (I'm using > Windows 7 operating system), nor to the publications referenced (I have no > access to any college or university library). > > > Huh? What has Windows 7 got to do with that? > > I'm using Windows 7 too, and all the SciPy sources are on public display > on github. > > > > Hence, I may not be interpreting everything correctly. > > > In which case you shouldn't tamper with the documentation. > Eh, that's why we review any changes before applying them. The scipy docs are incomplete and/or not easy to understand in a number of places, so attempts at improving them are very welcome. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lists at hilboll.de Tue Jul 24 07:48:34 2012 From: lists at hilboll.de (Andreas Hilboll) Date: Tue, 24 Jul 2012 13:48:34 +0200 Subject: [SciPy-User] Multiprocessing in optimize.leastsq? Message-ID: <3b9bc57b9ea824edbe76d9da19c32f96.squirrel@srv2.s4y.tournesol-consulting.eu> I wrote a script which calls optimize.leastsq in a for loop. The script is running on a machine with 24 CPU cores (Gentoo Linux 64bit). Now in htop, I can see one python process which in turn has 24 child processes, all of them running at between 0 and 100% CPU. I don't quite understand the origin of the 24 child processes. Is leastsq using some OpenMP parallelization automatically? If so: How can I influence how many child processes will be started? Thanks for your insight, Andreas.

From robert.kern at gmail.com Tue Jul 24 08:38:55 2012 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Jul 2012 13:38:55 +0100 Subject: [SciPy-User] Multiprocessing in optimize.leastsq?
In-Reply-To: <3b9bc57b9ea824edbe76d9da19c32f96.squirrel@srv2.s4y.tournesol-consulting.eu> References: <3b9bc57b9ea824edbe76d9da19c32f96.squirrel@srv2.s4y.tournesol-consulting.eu> Message-ID: On Tue, Jul 24, 2012 at 12:48 PM, Andreas Hilboll wrote: > I wrote a script which calls optimize.leastsq in a for loop. The script is > running on a machine with 24 CPU cores (Gentoo Linux 64bit). Now in htop, > I can see one python process which in turn has 24 child processes, all of > them running at between 0 and 100% CPU. > > I don't quite understand the origin of the 24 child processes. Is leastsq > using some OpenMP parallelization automatically? If so: How can I > influence how many child processes will be started? You probably linked in a multithreaded ATLAS for scipy.linalg, which scipy.optimize.leastsq() uses. To my knowledge, the number of threads it uses is not adjustable at runtime. http://math-atlas.sourceforge.net/faq.html#tnum -- Robert Kern

From L.Ulferts at hs-osnabrueck.de Tue Jul 24 08:40:13 2012 From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de) Date: 24 Jul 2012 12:40:13 -0000 Subject: [SciPy-User] Multiprocessing in optimize.leastsq? Message-ID: <20120724124013.12430.qmail@helios.ze.fh-osnabrueck.de> I am on vacation. Your message will not be forwarded; I will deal with it personally from 14 August onwards. In urgent cases you can contact my colleagues Mr Günterberg and Mr Roetmann, or my supervisor Mr Taeger.
Lothar Ulferts From njs at pobox.com Tue Jul 24 10:00:54 2012 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 24 Jul 2012 15:00:54 +0100 Subject: [SciPy-User] Gradient inputs to SciPy optimize routines In-Reply-To: <1343092891.82885.YahooMailNeo@web113420.mail.gq1.yahoo.com> References: <1342912142.98142.YahooMailNeo@web31802.mail.mud.yahoo.com> <1342998704.365.YahooMailNeo@web31801.mail.mud.yahoo.com> <1343045841.16147.YahooMailNeo@web31809.mail.mud.yahoo.com> <318DE191-84A9-48FA-9382-DBA4A0953FB8@molden.no> <1343056325.59465.YahooMailNeo@web31812.mail.mud.yahoo.com> <1343092891.82885.YahooMailNeo@web113420.mail.gq1.yahoo.com> Message-ID: On Tue, Jul 24, 2012 at 2:21 AM, David Baddeley wrote: > I would speculate that the reason the optimise routines don't accept > arbitrary sequence types for the Jacobian is one of performance. The > gradient functions get called many times during the optimisation and pythons > sequence types are slow enough that converting the arbitrary sequence into > an array would probably be slower than computing the Jacobian using finite > differences within the algorithm. Although calling np.ascontiguous has > relatively low overhead if the data is already an array, it is another > function call (expensive in python) and could be up to ~30% of total running > time for a simple model function. The same applies to too many checks on > data type. Within the optimising loop, they will be costly. The checks have to be done anyway (otherwise returning a non-array would just cause a segfault), so there wouldn't be any slowdown for accepting generic array_like's instead of just ndarrays here. I think it's just a bug; someone should file a bug report/write a PR. 
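The cost argument cuts both ways because `np.asarray` is effectively free for data that is already an ndarray: it returns the input object itself and only allocates when given a genuine Python sequence. A small sketch of that behaviour, added here for illustration and not taken from the thread:

```python
import numpy as np

a = np.arange(5, dtype=float)
b = np.asarray(a)                # already an ndarray: returned as-is
c = np.asarray([0.0, 1.0, 2.0])  # plain list: a new array is allocated

same_object = b is a             # no copy was made for the ndarray input
list_converted = isinstance(c, np.ndarray)
```

So accepting generic array_like inputs need not slow down the common case where callers already pass arrays; only sequence inputs pay for a conversion.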
-n

From elcortogm at googlemail.com Tue Jul 24 12:27:01 2012 From: elcortogm at googlemail.com (Steve Schmerler) Date: Tue, 24 Jul 2012 18:27:01 +0200 Subject: [SciPy-User] mkl_lapack32 ? In-Reply-To: <500557A3.8030300@univ-paris13.fr> References: <500557A3.8030300@univ-paris13.fr> Message-ID: <20120724162701.GE30678@cartman.physik.tu-freiberg.de> On Jul 17 14:16 +0200, Nicolas Greneche wrote: > [mkl] > library_dirs = /opt/intel/composerxe-2011.0.084/mkl/lib/intel64 > include_dirs = /opt/intel/composerxe-2011.0.084/mkl/include > mkl_libs = mkl_intel_lp64,mkl_intel_thread,mkl_core > > cc_exe = 'icc -O2 -g -openmp -avx' [...] > error: Command "icc -m64 -fPIC -shared > build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/build/src.linux-x86_64-2.6/scipy/lib/lapack/flapackmodule.o > build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/fortranobject.o > -L/opt/intel/composerxe-2011.0.084/mkl/lib/intel64 -L/usr/lib64 > -Lbuild/temp.linux-x86_64-2.6 -lmkl_lapack32 -lmkl_lapack64 > -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lpython2.6 -o > build/lib.linux-x86_64-2.6/scipy/lib/lapack/flapack.so" failed with exit > status 1 [...] Linking MKL is always fun. See [1]. Note that I used lapack_libs and mkl_libs in site.cfg, in contrast to what is mentioned in [2,3]. I did a serial build, but you just need to replace mkl_sequential -> mkl_intel_thread.
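The serial-MKL recipe described above can be spelled out as a site.cfg fragment. This is a hypothetical reconstruction from that description, reusing the library paths posted earlier in the thread; for a threaded build, replace `mkl_sequential` with `mkl_intel_thread` as noted:

```ini
[mkl]
library_dirs = /opt/intel/composerxe-2011.0.084/mkl/lib/intel64
include_dirs = /opt/intel/composerxe-2011.0.084/mkl/include
lapack_libs = mkl_intel_lp64,mkl_sequential,mkl_core
mkl_libs = mkl_intel_lp64,mkl_sequential,mkl_core
```

Listing the same library chain under both lapack_libs and mkl_libs is the point of the workaround: it keeps the build from inventing the nonexistent mkl_lapack32/mkl_lapack64 names.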
Your icc doesn't seem to use "-O2 -g -openmp -avx", but the distutils default "-m64 -fPIC -shared". [1] http://thread.gmane.org/gmane.comp.python.scientific.user/32036/focus=32037 [2] http://software.intel.com/en-us/articles/numpy-scipy-with-mkl/ [3] http://www.scipy.org/Installing_SciPy/Linux#head-0b5ce001569a20ddbbdb2187578000372a09acb1 best, Steve

From matthieu.brucher at gmail.com Tue Jul 24 12:55:34 2012 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 24 Jul 2012 18:55:34 +0200 Subject: [SciPy-User] mkl_lapack32 ? In-Reply-To: <500557A3.8030300@univ-paris13.fr> References: <500557A3.8030300@univ-paris13.fr> Message-ID: Hi, I think the latest Intel Composer versions have a built-in flag for using the MKL (-mkl). Perhaps if you use a mock .cfg, it could work? Matthieu 2012/7/17 Nicolas Greneche > Hi, > > I have some trouble to build scipy on my computer.
> > Here is my site.cfg (at top level of scipy directory) : > > [mkl] > library_dirs = /opt/intel/composerxe-2011.0.084/mkl/lib/intel64 > include_dirs = /opt/intel/composerxe-2011.0.084/mkl/include > mkl_libs = mkl_intel_lp64,mkl_intel_thread,mkl_core > > cc_exe = 'icc -O2 -g -openmp -avx' > > I run config / build / install : > > python setup.py config --compiler=intelem --fcompiler=intelem build_clib > --compiler=intelem --fcompiler=intelem build_ext --compiler=intelem > --fcompiler=intelem install --prefix=/usr/local > > icc: build/src.linux-x86_64-2.6/fortranobject.c > creating build/lib.linux-x86_64-2.6/scipy/lib/lapack > icc -m64 -fPIC -shared > > build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/build/src.linux-x86_64-2.6/scipy/lib/lapack/flapackmodule.o > build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/fortranobject.o > -L/opt/intel/composerxe-2011.0.084/mkl/lib/intel64 -L/usr/lib64 > -Lbuild/temp.linux-x86_64-2.6 -lmkl_lapack32 -lmkl_lapack64 > -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lpython2.6 -o > build/lib.linux-x86_64-2.6/scipy/lib/lapack/flapack.so > ld: cannot find -lmkl_lapack32 > ld: cannot find -lmkl_lapack32 > error: Command "icc -m64 -fPIC -shared > > build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/build/src.linux-x86_64-2.6/scipy/lib/lapack/flapackmodule.o > build/temp.linux-x86_64-2.6/build/src.linux-x86_64-2.6/fortranobject.o > -L/opt/intel/composerxe-2011.0.084/mkl/lib/intel64 -L/usr/lib64 > -Lbuild/temp.linux-x86_64-2.6 -lmkl_lapack32 -lmkl_lapack64 > -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lpthread -lpython2.6 -o > build/lib.linux-x86_64-2.6/scipy/lib/lapack/flapack.so" failed with exit > status 1 > > I don't understand why it attempts to link with mkl_lapack32 and > mkl_lapack64 whereas it runs on a 64 bits system. > > Anyone can help ? > > Regards, > > -- > Nicolas Gren?che > > Centre de Ressources Informatiques > Universit? 
> Paris NORD / UP13
> 99, avenue Jean-Baptiste Clément
> 93430 Villetaneuse
>
> Tel : 01 49 40 40 35
> Fax : 01 48 22 81 50
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alan.isaac at gmail.com Tue Jul 24 15:05:13 2012
From: alan.isaac at gmail.com (Alan G Isaac)
Date: Tue, 24 Jul 2012 15:05:13 -0400
Subject: [SciPy-User] nan handling by scipy.optimize
Message-ID: <500EF1E9.3090402@gmail.com>

Has there been any recent change in nan handling in scipy.optimize? I'm using a very recent EPD distribution, and a student is using a release with Python 2.6.5. I'm getting a zero returned by brentq and bisect when (iiuc) the function evaluation produces a nan, while he is not. The search interval is bounded away from zero.

Thanks for any insight,
Alan Isaac

From bnuttall at uky.edu Wed Jul 25 14:47:15 2012
From: bnuttall at uky.edu (Nuttall, Brandon C)
Date: Wed, 25 Jul 2012 18:47:15 +0000
Subject: [SciPy-User] Classification using neural networks
Message-ID: <77F5D06112589B4BB08A7C881381AB7604D338CD@ex10mb06.ad.uky.edu>

Folks,

I am looking for a method to do rock type identification using geophysical log data acquired from oil and gas wells. The geophysical log data are continuous recordings of a variety of bulk rock properties that change with the type and abundance of mineral constituents. What I want to do is classify the rock types (limestone, sandstone, shale, dolomite, etc) and their relative percentages. I remember reading an earlier SciPy post about a neural network module that used training datasets to establish a classification scheme and then was run against unknown data. But I can't find the post and don't remember the module name. Can anyone point me in the right direction?

Thanks.

Brandon Nuttall, KRPG-1364
Kentucky Geological Survey
www.uky.edu/kgs
bnuttall at uky.edu (KGS, Mo-We)
Brandon.nuttall at ky.gov (EEC, Th-Fr)
859-323-0544
859-684-7473 (cell)

From jniehof at lanl.gov Wed Jul 25 14:54:19 2012
From: jniehof at lanl.gov (Jonathan T. Niehof)
Date: Wed, 25 Jul 2012 12:54:19 -0600
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: <77F5D06112589B4BB08A7C881381AB7604D338CD@ex10mb06.ad.uky.edu>
References: <77F5D06112589B4BB08A7C881381AB7604D338CD@ex10mb06.ad.uky.edu>
Message-ID: <501040DB.4020102@lanl.gov>

On 07/25/2012 12:47 PM, Nuttall, Brandon C wrote:
> I am looking for a method to do rock type identification using
> geophysical log data acquired from oil and gas wells. The geophysical
> log data are continuous recordings of a variety of bulk rock properties
> that change with the type and abundance of mineral constituents. What I
> want to do is classify the rock types (limestone, sandstone, shale,
> dolomite, etc) and their relative percentages. I remember reading an
> earlier SciPy post about a neural network module that used training
> datasets to establish a classification scheme and then was run against
> unknown data. But I can't find the post and don't remember the module
> name. Can anyone point me in the right direction?

Don't know about classification schemes in particular, but there's ffnet for neural networks: http://ffnet.sourceforge.net/

--
Jonathan Niehof
ISR-3 Space Data Systems
Los Alamos National Laboratory
MS-D466
Los Alamos, NM 87545

Phone: 505-667-9595
email: jniehof at lanl.gov

Correspondence / Technical data or Software Publicly Available

From zachary.pincus at yale.edu Wed Jul 25 15:17:23 2012
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 25 Jul 2012 15:17:23 -0400
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: <77F5D06112589B4BB08A7C881381AB7604D338CD@ex10mb06.ad.uky.edu>
References: <77F5D06112589B4BB08A7C881381AB7604D338CD@ex10mb06.ad.uky.edu>
Message-ID: <46B67BDB-18B3-4CBA-B5D0-8F515F2A2ABB@yale.edu>

Hi Brandon,

Last time I was current with machine learning (ca. 5-7 years ago), the standard advice for the first pass at any particular problem was "throw it at an SVM". I don't know if that's still the go-to consensus these days -- can anyone else weigh in? (Does some kind of ensemble method routinely beat SVMs these days in the same way that SVMs were routinely beating neural networks in the early 2000s? I guess I should check out abstracts from recent NIPS conferences to find out...)

Anyway, my suggestion would be to use libSVM (which has handy command-line programs, or Python bindings), either from the authors:
http://www.csie.ntu.edu.tw/~cjlin/libsvm/
or as part of the scikits-learn package (which has other classification algorithms rolled in too):
http://scikit-learn.org/stable/

Here is "A Practical Guide to Support Vector Classification" by the authors of libSVM, which provides best practices for trying to train an SVM from new data:
http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

Also, libSVM supports multi-class classification (via a one-against-rest strategy, IIRC), which looks like what you'll need. Though I note that determining percentages of rock types from continuous input data is more of a mixture modeling task, rather than classification (or even regression).
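The classify-rocks-from-continuous-logs workflow discussed in this thread can be sketched with a deliberately simple nearest-centroid baseline. This is an illustration only, not libSVM or scikit-learn; the feature values and the standardise-then-classify structure are my own toy choices (standardising each feature to mean 0, spread 1 is the usual preprocessing before an SVM as well):

```python
# Minimal nearest-centroid classifier for labelled, continuous
# well-log-style features. Purely illustrative; for real work use
# libSVM / scikit-learn as suggested in this thread.

def standardize(rows):
    """Scale each feature to mean 0, std 1; return scaled rows and params."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    stds = [max((sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5, 1e-12)
            for j in range(d)]
    scaled = [[(r[j] - means[j]) / stds[j] for j in range(d)] for r in rows]
    return scaled, means, stds

def fit_centroids(rows, labels):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for r, y in zip(rows, labels):
        acc = sums.setdefault(y, [0.0] * len(r))
        for j, v in enumerate(r):
            acc[j] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

# Invented toy "logs": two features (say, gamma ray and bulk density).
X = [[100.0, 2.2], [95.0, 2.3], [20.0, 2.7], [25.0, 2.65]]
y = ["shale", "shale", "limestone", "limestone"]
Xs, mu, sd = standardize(X)
centroids = fit_centroids(Xs, y)
```

New samples must be scaled with the *training* means and stds (`mu`, `sd`) before calling `predict`; forgetting that is a classic source of silently wrong classifications.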
Maybe you'd want to look into fitting gaussian (or non-gaussian) mixture models. For this, scikits-learn has various tools also. Zach On Jul 25, 2012, at 2:47 PM, Nuttall, Brandon C wrote: > Folks, > > I am looking for a method to do rock type identification using geophysical log data acquired from oil and gas wells. The geophysical log data are continuous recordings of a variety of bulk rock properties that change with the type and abundance of mineral constituents. What I want to do is classify the rock types (limestone, sandstone, shale, dolomite, etc) and their relative percentages. I remember reading an earlier SCIPY post about a neural network module that used training datasets to establish a classification scheme and then was run against unknown data. But, I can?t find the post and don?t remember the module name. Can anyone point me in the right direction? > > Thanks. > > Brandon Nuttall, KRPG-1364 > Kentucky Geological Survey > www.uky.edu/kgs > bnuttall at uky.edu (KGS, Mo-We) > Brandon.nuttall at ky.gov (EEC,Th-Fr) > 859-323-0544 > 859-684-7473 (cell) > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From L.Ulferts at hs-osnabrueck.de Wed Jul 25 15:17:43 2012 From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de) Date: 25 Jul 2012 19:17:43 -0000 Subject: [SciPy-User] Classification using neural networks Message-ID: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> Ich befinde mich im Urlaub. Ihre Nachricht wird nicht weitergereicht, sondern ab dem 14. August von mir pers?nlich bearbeitet werden. In dringenden F?llen k?nnen Sie sich an meine Kollegen Herrn G?nterberg und Roetmann oder meinen Fachvorgesetzten Herrn Taeger wenden. 
Lothar Ulferts From andrew_giessel at hms.harvard.edu Wed Jul 25 15:28:01 2012 From: andrew_giessel at hms.harvard.edu (Andrew Giessel) Date: Wed, 25 Jul 2012 15:28:01 -0400 Subject: [SciPy-User] Classification using neural networks In-Reply-To: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> Message-ID: scikit-learn is probably a great place to start: http://scikit-learn.org/stable/ Many algorithms for classification, including NNs. ag 2012/7/25 > Ich befinde mich im Urlaub. > Ihre Nachricht wird nicht weitergereicht, > sondern ab dem 14. August von mir pers?nlich > bearbeitet werden. In dringenden F?llen k?nnen > Sie sich an meine Kollegen Herrn G?nterberg und > Roetmann oder meinen Fachvorgesetzten Herrn Taeger > wenden. > > Lothar Ulferts > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Andrew Giessel, PhD Department of Neurobiology, Harvard Medical School 220 Longwood Ave Boston, MA 02115 ph: 617.432.7971 email: andrew_giessel at hms.harvard.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From L.Ulferts at hs-osnabrueck.de Wed Jul 25 15:28:21 2012 From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de) Date: 25 Jul 2012 19:28:21 -0000 Subject: [SciPy-User] Classification using neural networks Message-ID: <20120725192821.11027.qmail@helios.ze.fh-osnabrueck.de> Ich befinde mich im Urlaub. Ihre Nachricht wird nicht weitergereicht, sondern ab dem 14. August von mir pers?nlich bearbeitet werden. In dringenden F?llen k?nnen Sie sich an meine Kollegen Herrn G?nterberg und Roetmann oder meinen Fachvorgesetzten Herrn Taeger wenden. 
Lothar Ulferts From marius.cobzarenco at gmail.com Tue Jul 24 11:57:39 2012 From: marius.cobzarenco at gmail.com (Marius) Date: Tue, 24 Jul 2012 08:57:39 -0700 (PDT) Subject: [SciPy-User] ANN: pandas 0.8.1 released In-Reply-To: References: Message-ID: <70569165-a050-45fd-8e90-c533027e7e63@googlegroups.com> Exciting stuff as always, congrats Wes - Marius On Sunday, July 22, 2012 9:26:12 PM UTC+1, Wes McKinney wrote: > > hi all, > > I'm pleased to announce the 0.8.1 release of pandas. This is > primarily a bugfix release, though it includes a number of useful > new features and performance improvements that make it an > immediate recommended upgrade for all pandas users. Some highlights: > > - Vectorized, NA-friendly string > methods (e.g. series.str.replace('foo', 'bar')) > - Improved plotting support for time series data > - Improved styling capability for DataFrame plots (specifying a > dict of styles) and plotting one column versus another > - Several new plot types developed as part of GSoC 2012 (see > http://pandasplotting.blogspot.com/ for on going developments > - Significantly improved parsing performance of ISO8601 datetime strings > > Thanks to all who contributed to this release, especially Chang > She and Vytautas Jan?auskas. As always source archives and Windows > installers > can be found on PyPI. > > What's new: http://pandas.pydata.org/pandas-docs/stable/whatsnew.html > > $ git log v0.8.0..v0.8.1 --pretty=format:%aN | sort | uniq -c | sort -rn > 77 Wes McKinney > 39 Chang She > 11 Vytautas Jancauskas > 2 Todd DeLuca > 2 Skipper Seabold > > Happy data hacking! > > - Wes > > What is it > ========== > pandas is a Python package providing fast, flexible, and > expressive data structures designed to make working with > relational, time series, or any other kind of labeled data both > easy and intuitive. It aims to be the fundamental high-level > building block for doing practical, real world data analysis in > Python. 
>
> Links
> =====
> Release Notes: http://github.com/pydata/pandas/blob/master/RELEASE.rst
> Documentation: http://pandas.pydata.org
> Installers: http://pypi.python.org/pypi/pandas
> Code Repository: http://github.com/pydata/pandas
> Mailing List: http://groups.google.com/group/pydata

From pav at iki.fi Wed Jul 25 18:07:35 2012
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 25 Jul 2012 22:07:35 +0000 (UTC)
Subject: [SciPy-User] nan handling by scipy.optimize
References: <500EF1E9.3090402@gmail.com>
Message-ID: 

Alan G Isaac <alan.isaac at gmail.com> writes:
> Has there been any recent change in nan handling in scipy.optimize?
> I'm using a very recent EPD distribution, and a student
> is using a release with Python 2.6.5. I'm getting a
> zero returned by brentq and bisect when (iiuc) the function
> evaluation produces a nan, while he is not.
> The search interval is bounded away from zero.

There are no code changes in Scipy 0.8.0b1...v0.10.1 (which are the versions I guess apply to your issue, please clarify) for the `bisect` and `brentq` routines.
--
Pauli Virtanen

From alan.isaac at gmail.com Wed Jul 25 20:21:40 2012
From: alan.isaac at gmail.com (Alan G Isaac)
Date: Wed, 25 Jul 2012 20:21:40 -0400
Subject: [SciPy-User] nan handling by scipy.optimize
In-Reply-To: 
References: <500EF1E9.3090402@gmail.com>
Message-ID: <50108D94.2030602@gmail.com>

On 7/25/2012 6:07 PM, Pauli Virtanen wrote:
> There are no code changes in Scipy 0.8.0b1...v0.10.1 (which are
> the versions I guess apply to your issue, please clarify) for
> the `bisect` and `brentq` routines.

Thank you for confirming. I am using 0.10.1. I'll check my student's version. My student is using the EPD for Mac, the distro with Python 2.6.5. I'll seek more details if needed.

I encountered two oddities.
1. The same code behaved differently in the two settings.
2. brentq was returning a zero even though that was outside the bounds we provided, (1e-04, 10).

Should brentq be able to return a number outside the bounds? Is there any unusual circumstance that might cause brentq to return a 0? If I changed the bounds to (1e-02, 10), so that the interval was bounded slightly farther from zero, then I was able to get a good match to my student's results. (Again, changing only this but otherwise using the *same* code.)
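A toy sketch of how a NaN evaluation can silently derail a bracketing root finder — this is textbook bisection, not scipy's actual brentq implementation, and the objective function is invented — plus a small wrapper that turns the silent failure into a loud one, which is a handy way to diagnose cases like the one described above:

```python
import math

def nan_guard(f):
    """Wrap an objective so a NaN evaluation raises instead of silently
    confusing the sign tests inside a bracketing root finder."""
    def wrapped(x):
        y = f(x)
        if math.isnan(y):
            raise ValueError("objective returned NaN at x=%r" % x)
        return y
    return wrapped

def bisect_naive(f, a, b, tol=1e-10):
    """Textbook bisection. Every comparison involving NaN is False, so a
    NaN f-value always takes the else branch -- the bracket can then
    march off toward an endpoint with no root in sight."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        fm = f(mid)
        if fa * fm < 0:      # False whenever fa or fm is NaN
            b = mid
        else:                # ... so NaN always sends us here
            a, fa = mid, fm
    return 0.5 * (a + b)

# f has root at e ~ 2.718, but returns NaN on part of the bracket:
f = lambda x: (math.log(x) - 1) if x > 0.2 else float("nan")
```

With the bracket (1e-4, 10), `f(1e-4)` is NaN and `bisect_naive` quietly converges near 10 instead of e; wrapped with `nan_guard`, the first NaN raises immediately and points at the offending x.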
Thanks,
Alan

From gael.varoquaux at normalesup.org Thu Jul 26 03:04:08 2012
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 26 Jul 2012 09:04:08 +0200
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: 
References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de>
Message-ID: <20120726070408.GA644@phare.normalesup.org>

On Wed, Jul 25, 2012 at 03:28:01PM -0400, Andrew Giessel wrote:
> scikit-learn is probably a great place to start:
> http://scikit-learn.org/stable/
> Many algorithms for classification, including NNs.

Well, it has a perceptron implementation:
http://scikit-learn.org/dev/modules/generated/sklearn.linear_model.Perceptron.html
but not any multilayer perceptron[*]. Thus, I don't really think that we can claim that we have neural networks. That said, they are so 1990's :)

Gael

[*] A GSoC project was supposed to implement this this year, but the student finally went for an internship at Google.

From gael.varoquaux at normalesup.org Thu Jul 26 03:08:33 2012
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 26 Jul 2012 09:08:33 +0200
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: <46B67BDB-18B3-4CBA-B5D0-8F515F2A2ABB@yale.edu>
References: <77F5D06112589B4BB08A7C881381AB7604D338CD@ex10mb06.ad.uky.edu> <46B67BDB-18B3-4CBA-B5D0-8F515F2A2ABB@yale.edu>
Message-ID: <20120726070833.GB644@phare.normalesup.org>

On Wed, Jul 25, 2012 at 03:17:23PM -0400, Zachary Pincus wrote:
> Last time I was current with machine learning (ca. 5-7 years ago), the standard advice for the first pass at any particular problem was "throw it at an SVM".

I think that this is still good advice. I would say: standardise your data (for each feature: mean = 0, norm = 1) and throw it into an SVM.

> I don't know if that's still the go-to consensus these days -- can anyone else weigh in? (Does some kind of ensemble method routinely beat SVMs these days in the same way that SVMs were routinely beating neural networks in the early 2000s?

If you have heaps of data, you can try random forests or gradient boosted trees, which work very well. The scikit-learn has good implementations of all these algorithms, but no neural networks: they are too old-fashioned for hip coders to contribute them ;)

> I guess I should check out abstracts from recent NIPS conferences to find out...)

Nah, NIPS is about mathturbation, not things that work on real data, these days.

Gael

From sturla at molden.no Thu Jul 26 12:26:55 2012
From: sturla at molden.no (Sturla Molden)
Date: Thu, 26 Jul 2012 18:26:55 +0200
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: <20120726070408.GA644@phare.normalesup.org>
References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> <20120726070408.GA644@phare.normalesup.org>
Message-ID: <50116FCF.7000003@molden.no>

On 26.07.2012 09:04, Gael Varoquaux wrote:
> Well, it has a perceptron implementation:
> http://scikit-learn.org/dev/modules/generated/sklearn.linear_model.Perceptron.html
> but not any multilayer perceptron[*]. Thus, I don't really think that we can
> claim that we have neural networks. That said, they are so 1990's :)

Yeah, it seems that SVMs are more fashionable than ANNs these days. I don't know why that is; SVMs are slow to train and use, and I have yet to see them out-perform an ANN. Perhaps it's because the latest edition of Numerical Recipes favours them over ANNs, because SVMs supposedly are more transparent and easier to understand (I beg to differ). Multilayer ANNs trained with Levenberg-Marquardt and error backpropagation are among the most powerful non-linear regression and classification tools there are. And by the way, SciPy already has an LM engine to train one (scipy.optimize.leastsq); all it takes is the code to compute the Jacobian by backpropagation.

Sturla

From cournape at gmail.com Thu Jul 26 12:38:36 2012
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 26 Jul 2012 17:38:36 +0100
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: <50116FCF.7000003@molden.no>
References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> <20120726070408.GA644@phare.normalesup.org> <50116FCF.7000003@molden.no>
Message-ID: 

On Thu, Jul 26, 2012 at 5:26 PM, Sturla Molden wrote:
> On 26.07.2012 09:04, Gael Varoquaux wrote:
>> Well, it has a perceptron implementation:
>> http://scikit-learn.org/dev/modules/generated/sklearn.linear_model.Perceptron.html
>> but not any multilayer perceptron[*]. Thus, I don't really think that we can
>> claim that we have neural networks. That said, they are so 1990's :)
>
> Yeah, it seems that SVMs are more fashionable than ANNs these days. I
> don't know why that is; SVMs are slow to train and use, and I have yet
> to see them out-perform an ANN. Perhaps it's because the latest
> edition of Numerical Recipes favours them over ANNs, because SVMs
> supposedly are more transparent and easier to understand (I beg to
> differ). Multilayer ANNs trained with Levenberg-Marquardt and error
> backpropagation are among the most powerful non-linear regression and
> classification tools there are. And by the way, SciPy already has an
> LM engine to train one (scipy.optimize.leastsq); all it takes is the
> code to compute the Jacobian by backpropagation.

I find Vapnik's work on structured risk minimization to be one of the crown jewels of machine learning (or statistics, for that matter), and would like to believe it is one of the reasons why it is/was popular.
ANNs also got a bad press because of the history - mentioning neural networks in your publication was an almost-sure way to get your paper poorly received a couple of years ago, I think.

The focus on one technique in particular is fundamentally wrong, I think (no free lunch and all that). It all depends on your data and what you're doing; the "use technique X" advice, where X changes every few years, is closer to pop culture than science, IMO.

David

From michf at post.tau.ac.il Thu Jul 26 01:33:08 2012
From: michf at post.tau.ac.il (Micha)
Date: Thu, 26 Jul 2012 08:33:08 +0300
Subject: [SciPy-User] nan handling by scipy.optimize
In-Reply-To: 
References: <500EF1E9.3090402@gmail.com>
Message-ID: 

On what computers are you and the student running? There are sometimes changes in hardware, especially with optimized code that uses SSE and/or fused multiply-add.

Pauli Virtanen wrote:
> Alan G Isaac <alan.isaac at gmail.com> writes:
>> Has there been any recent change in nan handling in scipy.optimize?
>> I'm using a very recent EPD distribution, and a student
>> is using a release with Python 2.6.5. I'm getting a
>> zero returned by brentq and bisect when (iiuc) the function
>> evaluation produces a nan, while he is not.
>> The search interval is bounded away from zero.
>
> There are no code changes in Scipy 0.8.0b1...v0.10.1 (which are
> the versions I guess apply to your issue, please clarify) for
> the `bisect` and `brentq` routines.
>
> --
> Pauli Virtanen

From gael.varoquaux at normalesup.org Thu Jul 26 13:07:57 2012
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 26 Jul 2012 19:07:57 +0200
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: <50116FCF.7000003@molden.no>
References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> <20120726070408.GA644@phare.normalesup.org> <50116FCF.7000003@molden.no>
Message-ID: <20120726170757.GO15510@phare.normalesup.org>

On Thu, Jul 26, 2012 at 06:26:55PM +0200, Sturla Molden wrote:
> I don't know why that is; SVMs are slow to train and use,

Hum. I guess it depends on the setting, but I disagree: well implemented, they are very fast in many settings.

> and I have yet to see them out-perform an ANN.

They are just way easier to set up and tune. Setting up and tuning an ANN can be a bit of black magic.

> Perhaps it's because the latest edition of Numerical Recipes favours
> them over ANNs,

In the machine learning community (which I know well) this is certainly not an argument. These people are not afraid of implementing complex algorithms.

> because SVMs supposedly are more transparent and easier to understand
> (I beg to differ).
I actually agree with NR here.

> Multilayer ANNs trained with Levenberg-Marquardt and error
> backpropagation are among the most powerful non-linear regression and
> classification tools there are.

Granted for non-linearity, but most high-dimensional problems are well solved with linear models.

> And by the way, SciPy already has an LM engine to train one
> (scipy.optimize.leastsq); all it takes is the code to compute the
> Jacobian by backpropagation.

Yeah, well, that's a bit of work, isn't it :)

Gael

From jsseabold at gmail.com Thu Jul 26 13:11:29 2012
From: jsseabold at gmail.com (Skipper Seabold)
Date: Thu, 26 Jul 2012 13:11:29 -0400
Subject: [SciPy-User] mailing list etiquette [Was: Re: nan handling by scipy.optimize]
Message-ID: 

Are there list etiquette suggestions posted anywhere? Out-of-office replies to mailing lists are one of my pet peeves. I think we all receive enough e-mail, wanted and otherwise.

2012/7/26 <L.Ulferts at hs-osnabrueck.de>:
> I am on vacation. Your message will not be forwarded, but will be
> handled by me personally from 14 August onwards. In urgent cases you
> can contact my colleagues Mr Günterberg and Mr Roetmann, or my
> supervisor Mr Taeger.
>
> Lothar Ulferts

From sturla at molden.no Thu Jul 26 13:29:01 2012
From: sturla at molden.no (Sturla Molden)
Date: Thu, 26 Jul 2012 19:29:01 +0200
Subject: [SciPy-User] Classification using neural networks
In-Reply-To: 
References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> <20120726070408.GA644@phare.normalesup.org> <50116FCF.7000003@molden.no>
Message-ID: <50117E5D.9080902@molden.no>

On 26.07.2012 18:38, David Cournapeau wrote:
> I find Vapnik's work on structured risk minimization to be one of the
> crown jewels of machine learning (or statistics, for that matter), and
> would like to believe it is one of the reasons why it is/was popular.
> ANNs also got a bad press because of the history - mentioning neural
> networks in your publication was an almost-sure way to get your paper
> poorly received a couple of years ago, I think.

They got hyped by neuroscientists who thought more in terms of an "artificial brain" than statistics. In reality, multilayer perceptrons are just a generalization of linear models with a logistic or Gaussian link function.

> The focus on one technique in particular is fundamentally wrong, I
> think (no free lunch and all that).
It all depends on your data and > what you're doing, the "use technique X" that sees X changed every few > years is closer to pop culture than science IMO. As with any statistical tool, blind application is never a good idea. Sturla From njs at pobox.com Thu Jul 26 13:30:42 2012 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 26 Jul 2012 18:30:42 +0100 Subject: [SciPy-User] mailing list etiquette [Was: Re: nan handling by scipy.optimize] In-Reply-To: References: Message-ID: 2012/7/26 Skipper Seabold : > Are there list etiquette suggestions posted anywhere? Out of office > replies to mailing lists are one of my pet peeves. I think we all > receive enough e-mail, wanted and otherwise. Not sure what good posting etiquette suggestions would do... these people are using buggy software to send spam. (Accidentally, but still.) I think pretty much everyone knows that's a bad idea, they're just oblivious. The traditional vacation program has extensive checks built in to ensure that oblivious people won't create this kind of nonsense: http://man.cx/vacation%281%29 but sadly this wisdom has been lost. Honestly the best solution is probably for the list admins to kick anyone who sends a vacation message off the list. Then their vacation messages will start bouncing and they can re-subscribe when they get back.
-n From sturla at molden.no Thu Jul 26 13:38:45 2012 From: sturla at molden.no (Sturla Molden) Date: Thu, 26 Jul 2012 19:38:45 +0200 Subject: [SciPy-User] Classification using neural networks In-Reply-To: <20120726170757.GO15510@phare.normalesup.org> References: <20120725191743.4845.qmail@helios.ze.fh-osnabrueck.de> <20120726070408.GA644@phare.normalesup.org> <50116FCF.7000003@molden.no> <20120726170757.GO15510@phare.normalesup.org> Message-ID: <501180A5.9020608@molden.no> On 26.07.2012 19:07, Gael Varoquaux wrote: > Yeah, well, that's a bit of work, isn't it :) Not really :) See e.g. here, equation 12.36: http://www.eng.auburn.edu/~wilambm/pap/2011/K10149_C012.pdf Sturla From jsseabold at gmail.com Thu Jul 26 13:39:04 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Thu, 26 Jul 2012 13:39:04 -0400 Subject: [SciPy-User] mailing list etiquette [Was: Re: nan handling by scipy.optimize] In-Reply-To: References: Message-ID: On Thu, Jul 26, 2012 at 1:30 PM, Nathaniel Smith wrote: > 2012/7/26 Skipper Seabold : >> Are there list etiquette suggestions posted anywhere? Out of office >> replies to mailing lists are one of my pet peeves. I think we all >> receive enough e-mail, wanted and otherwise. > > Not sure what good posting etiquette suggestions would do... these > people are using buggy software to send spam. (Accidentally, but > still.) I think pretty much everyone knows that's a bad idea, they're > just oblivious. The traditional vacation program has extensive checks > built in to ensure that oblivious people won't create this kind of > nonsense: > http://man.cx/vacation%281%29 > but sadly this wisdom has been lost. > > Honestly the best solution is probably for the list admins to kick > anyone who sends a vacation message off the list. Then their vacation > messages will start bouncing and they can re-subscribe when they get > back. > Right. This is the solution I'd like. Just thought it a good idea to have etiquette posted somewhere stating that this action would be taken in the event the suggestions are ignored. Skipper From sturla at molden.no Thu Jul 26 13:44:46 2012 From: sturla at molden.no (Sturla Molden) Date: Thu, 26 Jul 2012 19:44:46 +0200 Subject: [SciPy-User] mailing list etiquette [Was: Re: nan handling by scipy.optimize] In-Reply-To: References: Message-ID: <5011820E.1060400@molden.no> On 26.07.2012 19:39, Skipper Seabold wrote: > Right. This is the solution I'd like. Just thought it a good idea to > have etiquette posted somewhere stating that this action would be > taken in the event the suggestions are ignored. > > That should be self-evident. Out-of-office replies to a mailing list are littering. Sturla From alan.isaac at gmail.com Thu Jul 26 14:33:13 2012 From: alan.isaac at gmail.com (Alan G Isaac) Date: Thu, 26 Jul 2012 14:33:13 -0400 Subject: [SciPy-User] nan handling by scipy.optimize In-Reply-To: References: <500EF1E9.3090402@gmail.com> Message-ID: <50118D69.9060903@gmail.com> On 7/26/2012 1:33 AM, Micha wrote: > On what computers are you and the student running? There are sometimes changes in hardware, especially with optimized code that uses SSE and/or fused > multiply-add My student is using 32bit EPD with scipy version 0.8.0 on a Mac. I am using the epd-7.3-1-win-x86.msi installer under 64bit Windows, which has SciPy 0.10.1. My background question remains: should brentq(f, 1e-04, 10) ever be able to return 0?
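The bracketing contract behind this question can be checked with a toy objective (x**2 - 2 is my stand-in here; the student's actual f is not shown in the thread). brentq may only return a point inside [a, b] and rejects a bracket where f(a) and f(b) share a sign, so a spurious 0 from brentq(f, 1e-04, 10) points at something like NaN return values defeating the solver's sign comparisons.

```python
# Sketch of brentq's bracketing guarantees on a well-behaved objective.
import math
from scipy.optimize import brentq

f = lambda x: x * x - 2.0           # single root at sqrt(2), inside [1e-4, 10]

root = brentq(f, 1e-4, 10)
ok_bracket = 1e-4 <= root <= 10     # a bracketing method cannot leave [a, b]
print(ok_bracket, abs(root - math.sqrt(2.0)) < 1e-9)

# A same-sign bracket is rejected outright, not answered with a bogus point.
try:
    brentq(f, 2.0, 3.0)             # f(2) > 0 and f(3) > 0
    rejected = False
except ValueError:
    rejected = True
print(rejected)

# Every ordering comparison with NaN is False, which is how a NaN-returning
# objective can silently derail the sign checks inside a bracketing solver.
nan = float("nan")
print(nan < 0.0, nan > 0.0)
```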
Alan Isaac From jr at sun.ac.za Thu Jul 26 15:51:56 2012 From: jr at sun.ac.za (Johann Rohwer) Date: Thu, 26 Jul 2012 21:51:56 +0200 Subject: [SciPy-User] Scipy test errors Message-ID: <50119FDC.70704@sun.ac.za> The latest SciPy from GIT gives a whole bunch of test errors and failures, most of them in the test_arpack module: Ran 5483 tests in 78.064s FAILED (KNOWNFAIL=13, SKIP=24, errors=8, failures=63) Out[4]: Is this to be expected? The official release 0.11.0rc1 gives slightly fewer, but still a significant number of failures (57). OS: Linux Ubuntu 12.04 amd64 Self compiled from GIT against Ubuntu shipped atlas, amd and umfpack libraries.
In [7]: numpy.__version__ Out[7]: '1.8.0.dev-b74c6be' In [8]: scipy.__version__ Out[8]: '0.12.0.dev-8e918cd' --Johann From jr at sun.ac.za Thu Jul 26 15:56:34 2012 From: jr at sun.ac.za (Johann Rohwer) Date: Thu, 26 Jul 2012 21:56:34 +0200 Subject: [SciPy-User] scipy.pkgload() error Message-ID: <5011A0F2.9090708@sun.ac.za> After upgrading my scipy installation to the latest source from git, scipy.pkgload() no longer works and gives the following traceback: In [2]: scipy.pkgload() --------------------------------------------------------------------------- NameError Traceback (most recent call last) in () ----> 1 scipy.pkgload() /usr/local/lib/python2.7/dist-packages/numpy/__init__.pyc in pkgload(*packages, **options) 132 133 def pkgload(*packages, **options): --> 134 loader = PackageLoader(infunc=True) 135 return loader(*packages, **options) 136 /usr/local/lib/python2.7/dist-packages/numpy/_import_tools.pyc in __init__(self, verbose, infunc) 15 self.parent_frame = frame = sys._getframe(_level) 16 self.parent_name = eval('__name__',frame.f_globals,frame.f_locals) ---> 17 parent_path = eval('__path__',frame.f_globals,frame.f_locals) 18 if isinstance(parent_path, str): 19 parent_path = [parent_path] in () NameError: name '__path__' is not defined In [3]: scipy.__version__ Out[3]: '0.12.0.dev-8e918cd' ------------------------------------------------------------------------------ I see the pkgload() method is actually implemented in numpy (and not scipy, even though it's invoked via scipy), but am not sure whether this is expected. Any ideas?
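The traceback is easier to read with a minimal reproduction of the frame-inspection pattern it shows (pkgload_like below is my hypothetical stand-in, not numpy's actual PackageLoader code): __path__ is evaluated in the caller's frame, and only a package's __init__ module defines one, so calling from an interactive shell or plain script raises the NameError seen above.

```python
# Hypothetical stand-in for the lookup numpy's PackageLoader performs:
# evaluate __path__ in the *caller's* frame. Outside a package __init__
# (e.g. in an interactive shell) no __path__ exists, hence the NameError.
import sys

def pkgload_like():
    frame = sys._getframe(1)   # the frame of whoever called us
    return eval('__path__', frame.f_globals, frame.f_locals)

try:
    pkgload_like()
    failed = False
except NameError as err:
    failed = True
    print(err)                 # the same error the traceback above reports
```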
--Johann From ralf.gommers at googlemail.com Thu Jul 26 15:57:30 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 26 Jul 2012 21:57:30 +0200 Subject: [SciPy-User] Scipy test errors In-Reply-To: <50119FDC.70704@sun.ac.za> References: <50119FDC.70704@sun.ac.za> Message-ID: On Thu, Jul 26, 2012 at 9:51 PM, Johann Rohwer wrote: > The latest SciPy from GIT gives a whole bunch of test errors and failures, > most of them in the test_arpack module: > > Ran 5483 tests in 78.064s > > FAILED (KNOWNFAIL=13, SKIP=24, errors=8, failures=63) > Out[4]: > > Is this to be expected? The official release 0.11.0rc1 gives slightly > fewer, > but still a significant number of failures (57). > That's not expected. Could you please send us the output of the test run? And with what command did you install, and what Python version? Ralf > > OS: Linux Ubuntu 12.04 amd64 > Self compiled from GIT against Ubuntu shipped atlas, amd and umfpack > libraries. > > In [7]: numpy.__version__ > Out[7]: '1.8.0.dev-b74c6be' > > In [8]: scipy.__version__ > Out[8]: '0.12.0.dev-8e918cd' > > > --Johann From jr at sun.ac.za Thu Jul 26 16:13:57 2012 From: jr at sun.ac.za (Johann Rohwer) Date: Thu, 26 Jul 2012 22:13:57 +0200 Subject: [SciPy-User] Scipy test errors In-Reply-To: References: <50119FDC.70704@sun.ac.za> Message-ID: <5011A505.2020209@sun.ac.za> On 26/07/2012 21:57, Ralf Gommers wrote: > > > On Thu, Jul 26, 2012 at 9:51 PM, Johann Rohwer > wrote: > > The latest SciPy from GIT gives a whole bunch of test errors and failures, > most of them in the test_arpack module: > > Ran 5483 tests in 78.064s > > FAILED (KNOWNFAIL=13, SKIP=24, errors=8, failures=63) > Out[4]: > > Is this to be expected? The official release 0.11.0rc1 gives slightly fewer, > but still a significant number of failures (57). > > > That's not expected. Could you please send us the output of the test run? > And with what command did you install, and what Python version? > > Ralf Test output below, version info included. Installation was with $ python setup.py build $ sudo python setup.py install I made sure to clean out the build directory first, and remove any scipy* files/dirs from the dist-packages directory.
------------------------ test output below ------------------------------- In [3]: scipy.test() Running unit tests for scipy NumPy version 1.8.0.dev-b74c6be NumPy is installed in /usr/local/lib/python2.7/dist-packages/numpy SciPy version 0.12.0.dev-8e918cd SciPy is installed in /usr/local/lib/python2.7/dist-packages/scipy Python version 2.7.3 (default, Apr 20 2012, 22:39:59) [GCC 4.6.3] nose version 1.1.2 ..............................................................................................................................................................................................................................K........................................................................................................K..................................................................K..K...............................................................................................................................................................................................F......................................................................................................................................................................................................................................................................................................................................................................................................................... 
.............................................................SSSSSS......SSSSSS......SSSS..............................................................................................................................................................................................................................................................................................................................................................................K............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................... ..........................................................................................................................................................................................................................................................................................................................................F...........................................................................................................................................................................................................................................................................................................................................................................................FF.F.F..FF..FF.FFF..FF......F...F...F..F.F..F..............FFF.FFF.FF...FF..FF..FF.............FF..FF..FFF.....................F....F.......F..............FFF...F..FF.............F...F...F....F...F...F..............FFF.FFF.FFF............................................................... 
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................K..................................................................K................................................................................................................................................................KK...................................................................................................................................................................................... .................................................................................................................................................................................................................................................................K.K.............................................................................................................................................................................................................................................................................................................................................................................................K........K..............SSSSSSS..............................................................................................................................EEEE....EEEE..................................S.......................................................................................................................................................... 
........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................ ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom cdf_ppf') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 104, in check_cdf_ppf ppf05 = distfn.ppf(0.5,*arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6132, in ppf place(output,cond,self._ppf(*goodargs) + loc) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21]), 'nbinom cdf_ppf') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 111, in check_cdf_ppf2 npt.assert_array_equal(distfn.ppf(distfn.cdf(supp,*arg),*arg), File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 
6132, in ppf place(output,cond,self._ppf(*goodargs) + loc) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom pmf_cdf') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 136, in check_pmf_cdf startind = np.int(distfn._ppf(0.01,*arg)-1) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom oth') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 156, in check_oth median_sf = distfn.isf(0.5, *arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6184, in isf place(output,cond,self._isf(*goodargs) + loc) #PB same as ticket 766 File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 5828, in _isf return self._ppf(1-q,*args) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: 
test_discrete_basic.test_discrete_basic(, (0.4, 0.4), 'nbinom cdf_ppf') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 104, in check_cdf_ppf ppf05 = distfn.ppf(0.5,*arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6132, in ppf place(output,cond,self._ppf(*goodargs) + loc) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12]), 'nbinom cdf_ppf') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 111, in check_cdf_ppf2 npt.assert_array_equal(distfn.ppf(distfn.cdf(supp,*arg),*arg), File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6132, in ppf place(output,cond,self._ppf(*goodargs) + loc) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), 'nbinom pmf_cdf') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File 
"/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 136, in check_pmf_cdf startind = np.int(distfn._ppf(0.01,*arg)-1) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), 'nbinom oth') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/tests/test_discrete_basic.py", line 156, in check_oth median_sf = distfn.isf(0.5, *arg) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6184, in isf place(output,cond,self._isf(*goodargs) + loc) #PB same as ticket 766 File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 5828, in _isf return self._ppf(1-q,*args) File "/usr/local/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 6633, in _ppf vals = ceil(special.nbdtrik(q,n,p)) RuntimeWarning: overflow encountered in nbdtrik ====================================================================== FAIL: test_mio.test_mat4_3d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/io/matlab/tests/test_mio.py", line 771, in test_mat4_3d stream, {'a': arr}, True, '4') File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1018, in assert_raises return nose.tools.assert_raises(*args,**kwargs) AssertionError: DeprecationWarning not raised ====================================================================== FAIL: 
Regression test for #651: better handling of badly conditioned ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/scipy/signal/tests/test_filter_design.py", line 34, in test_bad_filter assert_raises(BadCoefficients, tf2zpk, [1e-15], [1.0, 1.0]) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1018, in assert_raises return nose.tools.assert_raises(*args,**kwargs) AssertionError: BadCoefficients not raised ====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling (mismatch 100.0%) x: array([[ 3.538e-01, -5.100e+04], [ -1.602e-01, -5.268e+04], [ 1.852e-01, 4.530e+05],... y: array([[ 3.538e-01, -1.492e+05], [ -1.602e-01, 1.236e+05], [ 1.852e-01, 2.825e+06],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=cayley (mismatch 100.0%) x: array([[ 2.382e-01, -5.280e+07], [ -1.079e-01, 1.497e+06], [ 1.247e-01, 1.492e+07],... y: array([[ 2.382e-01, -6.108e+06], [ -1.079e-01, 2.459e+07], [ 1.247e-01, -1.863e+07],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%) x: array([[-0.238, -0.176], [ 0.108, 0.215], [-0.125, 0.02 ],... y: array([[-0.238, -0.237], [ 0.108, 0.134], [-0.125, -0.022],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628 error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%) x: array([[-0.238, -0.218], [ 0.108, 0.137], [-0.125, -0.039],... y: array([[-0.238, -0.233], [ 0.108, 0.135], [-0.125, -0.027],... 
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 3.538e-01, 6.766e+03], [ -1.602e-01, 1.806e+04], [ 1.852e-01, -2.741e+03],...
 y: array([[ 3.538e-01, 2.296e+05], [ -1.602e-01, 6.077e+05], [ 1.852e-01, -4.986e+04],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:standard, typ=f, which=LM, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ -2.382e-01, -6.118e+07], [ 1.079e-01, -7.774e+07], [ -1.247e-01, -7.182e+07],...
 y: array([[ -2.382e-01, 9.311e+07], [ 1.079e-01, 2.382e+08], [ -1.247e-01, -9.534e+07],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ -3.328e-02, -5.409e+05], [ -8.831e-02, 5.020e+05], [ 5.866e-03, 4.510e+05],...
 y: array([[ -3.328e-02, -8.350e+04], [ -8.831e-02, 7.853e+04], [ 5.866e-03, 7.305e+04],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ -3.875e-03, 9.705e+13], [ -1.028e-02, -1.171e+13], [ 6.831e-04, -2.009e+14],...
 y: array([[ -3.875e-03, 3.081e+15], [ -1.028e-02, 2.721e+15], [ 6.831e-04, -1.980e+16],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%)
 x: array([[ 1.766, -0.638], [ 4.972, 0.271], [ 1.366, 0.485],...
 y: array([[ -32.268, -0.638], [ 13.58 , 0.271], [ -36.003, 0.485],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[-0.033, 0.584], [-0.088, 0.797], [ 0.006, 0.794],...
 y: array([[-0.033, -0.023], [-0.088, -0.028], [ 0.006, 0.225],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ -3.875e-03, 2.041e+00], [ -1.028e-02, 5.741e+00], [ 6.828e-04, 1.570e+00],...
 y: array([[ -3.875e-03, 3.999e+01], [ -1.028e-02, 7.073e+01], [ 6.830e-04, -3.789e+02],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 3.328e-02, -1.713e+06], [ 8.831e-02, 7.176e+05], [ -5.866e-03, 1.226e+06],...
 y: array([[ 3.328e-02, -1.012e+05], [ 8.831e-02, 4.151e+04], [ -5.866e-03, 6.448e+04],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:standard, typ=f, which=SM, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ 3.875e-03, 1.460e+11], [ 1.028e-02, 9.416e+11], [ -6.831e-04, -4.474e+12],...
 y: array([[ 3.875e-03, 7.928e+13], [ 1.028e-02, 1.029e+14], [ -6.831e-04, -6.526e+14],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=LA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[-1.052, 0.564], [-1.467, -0.239], [-0.998, -0.429],...
 y: array([[-0.625, 0.564], [-1.691, -0.239], [-0.865, -0.429],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:standard, typ=f, which=LA, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 1.132e+10, -5.636e-01], [ -7.088e+09, 2.391e-01], [ 3.011e+08, 4.286e-01],...
 y: array([[ 2.432e+11, -5.636e-01], [ -1.524e+11, 2.391e-01], [ 6.310e+09, 4.286e-01],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ -3.695e+04, -3.538e-01], [ -9.398e+04, 1.602e-01], [ 2.773e+04, -1.852e-01],...
 y: array([[ -1.161e+06, -3.538e-01], [ -3.055e+06, 1.602e-01], [ 3.433e+05, -1.852e-01],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%)
 x: array([[ 0.117, -0.238], [-0.332, 0.108], [-0.111, -0.125],...
 y: array([[ 0.234, -0.238], [-0.159, 0.108], [ 0.009, -0.125],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:standard, typ=f, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ 0.191, -0.238], [-0.203, 0.108], [-0.019, -0.125],...
 y: array([[ 0.237, -0.238], [-0.145, 0.108], [-0.006, -0.125],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:standard, typ=f, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 5.418e+04, -3.538e-01], [ 1.469e+05, 1.602e-01], [ 3.020e+04, -1.852e-01],...
 y: array([[ 1.874e+06, -3.538e-01], [ 5.017e+06, 1.602e-01], [ -9.680e+04, -1.852e-01],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=normal (mismatch 100.0%)
 x: array([[ -2.382e-01, 9.572e+07], [ 1.079e-01, 2.131e+08], [ -1.247e-01, 1.680e+08],...
 y: array([[ -2.382e-01, -5.330e+06], [ 1.079e-01, 2.797e+07], [ -1.247e-01, 1.429e+07],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ -3.538e-01, 1.375e+03], [ 1.602e-01, 4.184e+03], [ -1.852e-01, 2.421e+03],...
 y: array([[ -3.538e-01, 5.666e+04], [ 1.602e-01, 1.535e+05], [ -1.852e-01, 6.859e+03],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ 2.382e-01, 2.461e+07], [ -1.079e-01, 9.884e+07], [ 1.247e-01, 8.637e+07],...
 y: array([[ 2.382e-01, -1.013e+07], [ -1.079e-01, 1.094e+07], [ 1.247e-01, 1.851e+07],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%)
 x: array([[ -2.382e-01, 2.200e+08], [ 1.079e-01, -3.070e+06], [ -1.247e-01, -7.479e+07],...
 y: array([[ -2.382e-01, 1.143e+07], [ 1.079e-01, -1.551e+08], [ -1.247e-01, -1.689e+07],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 3.538e-01, 3.243e+04], [ -1.602e-01, 8.056e+04], [ 1.852e-01, -3.566e+04],...
 y: array([[ 3.538e-01, 9.989e+05], [ -1.602e-01, 2.616e+06], [ 1.852e-01, -3.621e+05],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ 2.382e-01, 9.366e+09], [ -1.079e-01, 1.463e+10], [ 1.247e-01, 1.174e+10],...
 y: array([[ 2.382e-01, -2.002e+09], [ -1.079e-01, -3.654e+09], [ 1.247e-01, 6.999e+09],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=asarray, OPpart=None, mode=normal (mismatch 100.0%)
 x: array([[ -2.382e-01, 7.501e+08], [ 1.079e-01, 9.644e+08], [ -1.247e-01, 6.515e+08],...
 y: array([[ -2.382e-01, 4.634e+07], [ 1.079e-01, 4.662e+07], [ -1.247e-01, 2.463e+07],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=LM, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ -3.538e-01, 3.473e+05], [ 1.602e-01, 9.216e+05], [ -1.852e-01, -7.107e+04],...
 y: array([[ -3.538e-01, 1.175e+07], [ 1.602e-01, 3.116e+07], [ -1.852e-01, -2.130e+06],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 3.328e-02, 1.071e+05], [ 8.831e-02, -1.551e+05], [ -5.866e-03, -2.482e+05],...
 y: array([[ 3.328e-02, 1.585e+04], [ 8.831e-02, -2.406e+04], [ -5.866e-03, -4.011e+04],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ -3.875e-03, 6.606e+10], [ -1.028e-02, 1.908e+11], [ 6.831e-04, -1.002e+12],...
 y: array([[ -3.875e-03, 1.753e+13], [ -1.028e-02, 2.231e+13], [ 6.831e-04, -1.424e+14],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ -3.328e-02, -1.334e+06], [ -8.831e-02, 9.205e+05], [ 5.866e-03, 2.179e+05],...
 y: array([[ -3.328e-02, -2.090e+05], [ -8.831e-02, 1.448e+05], [ 5.866e-03, 3.509e+04],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ -3.875e-03, -2.959e+01], [ -1.028e-02, 7.191e+00], [ 6.831e-04, -1.718e+01],...
 y: array([[ -3.875e-03, -5.752e+02], [ -1.028e-02, 2.521e+02], [ 6.831e-04, -3.090e+02],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SM, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 3.328e-02, -2.749e+06], [ 8.831e-02, 1.344e+06], [ -5.866e-03, -1.148e+06],...
 y: array([[ 3.328e-02, -4.389e+05], [ 8.831e-02, 2.145e+05], [ -5.866e-03, -1.843e+05],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SM, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley (mismatch 100.0%)
 x: array([[ 3.875e-03, 1.086e+07], [ 1.028e-02, 1.298e+07], [ -6.831e-04, -8.457e+07],...
 y: array([[ 3.875e-03, 1.455e+09], [ 1.028e-02, 1.775e+09], [ -6.831e-04, -1.149e+10],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=normal (mismatch 100.0%)
 x: array([[ 8.851e+08, -2.382e-01], [ 1.052e+09, 1.079e-01], [ 6.852e+08, -1.247e-01],...
 y: array([[ 7.544e+07, -2.382e-01], [ 5.853e+07, 1.079e-01], [ 7.238e+06, -1.247e-01],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last): [identical to the first failure above]
AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling (mismatch 100.0%)
 x: array([[ 1.032e+05, 3.538e-01], [ 2.180e+05, -1.602e-01], [ -5.973e+04, 1.852e-01],...
 y: array([[ 2.852e+06, 3.538e-01], [ 7.514e+06, -1.602e-01], [ -7.013e+05, 1.852e-01],...
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13 error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal (mismatch 100.0%) x: array([[ -3.198e+09, 2.382e-01], [ -4.772e+09, -1.079e-01], [ -3.402e+09, 1.247e-01],... y: array([[ -5.317e+07, 2.382e-01], [ -1.956e+08, -1.079e-01], [ -1.867e+08, 1.247e-01],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13 error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling (mismatch 100.0%) x: array([[ 4.762e+04, 3.538e-01], [ 1.267e+05, -1.602e-01], [ 3.473e+03, 1.852e-01],... y: array([[ 1.618e+06, 3.538e-01], [ 4.307e+06, -1.602e-01], [ -2.144e+05, 1.852e-01],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13 error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=normal (mismatch 100.0%) x: array([[ -2.967e+08, -2.382e-01], [ -4.687e+08, 1.079e-01], [ -3.395e+08, -1.247e-01],... y: array([[ 8.006e+05, -2.382e-01], [ -1.710e+07, 1.079e-01], [ -1.666e+07, -1.247e-01],... 
====================================================================== FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling') ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose verbose=verbose, header=header) File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13 error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling (mismatch 100.0%) x: array([[ 8.264e+04, 3.538e-01], [ 2.138e+05, -1.602e-01], [ -4.002e+04, 1.852e-01],... y: array([[ 2.707e+06, 3.538e-01], [ 7.154e+06, -1.602e-01], [ -6.331e+05, 1.852e-01],... 
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:standard, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[ -2.761e+07, -2.382e-01], [ -7.341e+07, 1.079e-01], [ -6.082e+07, -1.247e-01],...
 y: array([[ 4.756e+06, -2.382e-01], [ -7.514e+06, 1.079e-01], [ -1.193e+07, -1.247e-01],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:general, typ=f, which=LM, sigma=0.5, mattype=asarray, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[ 1.936e-02, 3.841e+10], [ 1.105e-01, -2.225e+11], [ 1.322e-01, -1.655e+11],...
 y: array([[ 1.936e-02, 5.095e+08], [ 1.105e-01, -2.041e+09], [ 1.322e-01, -1.589e+09],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:general, typ=f, which=SM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ -1.094e-01, 2.043e+05], [ -7.154e-02, 2.625e+05], [ 6.895e-02, 5.477e+04],...
 y: array([[ -1.094e-01, 2.014e+03], [ -7.154e-02, 3.819e+03], [ 6.895e-02, -2.607e+02],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:general, typ=f, which=SM, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 1.094e-01, 2.303e+07], [ 7.154e-02, 1.349e+07], [ -6.895e-02, 1.912e+07],...
 y: array([[ 1.094e-01, 8.253e+06], [ 7.154e-02, 4.796e+06], [ -6.895e-02, 6.893e+06],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[ 9.049e+07, -1.936e-02], [ -5.051e+08, -1.105e-01], [ -3.773e+08, -1.322e-01],...
 y: array([[ 1.278e+06, -1.936e-02], [ -4.495e+06, -1.105e-01], [ -3.590e+06, -1.322e-01],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 2.863, 0.055], [ 2.826, 0.314], [ 1.957, 0.375],...
 y: array([[ 1.105, 0.055], [ 0.749, 0.314], [ 0.823, 0.375],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[ 0.509, 0.019], [ 0.079, 0.111], [ 0.215, 0.132],...
 y: array([[ 0.441, 0.019], [ 0.255, 0.111], [ 0.366, 0.132],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.00178814, atol=0.000357628
error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[-1.217, -0.019], [-1.067, -0.111], [-0.752, -0.132],...
 y: array([[-0.547, -0.019], [-0.353, -0.111], [-0.428, -0.132],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 1.331, 0.055], [ 1.114, 0.314], [ 0.955, 0.375],...
 y: array([[ 0.834, 0.055], [ 0.511, 0.314], [ 0.672, 0.375],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'f', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=0.000357628, atol=0.000357628
error for eigsh:general, typ=f, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[-0.289, -0.019], [-0.055, -0.111], [-0.246, -0.132],...
 y: array([[-0.413, -0.019], [-0.231, -0.111], [-0.353, -0.132],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=LM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[-0.019, 0.162], [-0.111, 0.267], [-0.132, -0.033],...
 y: array([[-0.019, -0.322], [-0.111, -0.156], [-0.132, -0.302],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=LM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[ 0.019, -0.44 ], [ 0.111, -0.255], [ 0.132, -0.368],...
 y: array([[ 0.019, -0.44 ], [ 0.111, -0.255], [ 0.132, -0.368],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'LM', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=LM, sigma=0.5, mattype=asarray, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[-0.019, -0.441], [-0.111, -0.251], [-0.132, -0.365],...
 y: array([[-0.019, -0.44 ], [-0.111, -0.255], [-0.132, -0.368],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SM, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 1.094e-01, -1.003e+11], [ 7.154e-02, -5.602e+10], [ -6.895e-02, -8.474e+10],...
 y: array([[ 1.094e-01, -3.758e+10], [ 7.154e-02, -2.174e+10], [ -6.895e-02, -3.148e+10],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SM, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ -1.094e-01, -1.145e+06], [ -7.154e-02, -4.563e+05], [ 6.895e-02, -1.161e+06],...
 y: array([[ -1.094e-01, -6.119e+05], [ -7.154e-02, -3.431e+05], [ 6.895e-02, -5.238e+05],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SM', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SM, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 1.094e-01, -9.523e+05], [ 7.154e-02, -5.036e+05], [ -6.895e-02, -8.324e+05],...
 y: array([[ 1.094e-01, -3.638e+05], [ 7.154e-02, -2.108e+05], [ -6.895e-02, -3.041e+05],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[ 2.055, -0.019], [ 1.723, -0.111], [ 1.222, -0.132],...
 y: array([[ 0.729, -0.019], [ 0.504, -0.111], [ 0.528, -0.132],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 1.38 , 0.055], [ 2.191, 0.314], [ 2.178, 0.375],...
 y: array([[ 1.007, 0.055], [ 0.649, 0.314], [ 0.794, 0.375],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=csr_matrix, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[-0.202, -0.019], [ 0.163, -0.111], [-0.129, -0.132],...
 y: array([[-0.402, -0.019], [-0.219, -0.111], [-0.347, -0.132],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[-0.44 , -0.019], [-0.254, -0.111], [-0.367, -0.132],...
 y: array([[-0.44 , -0.019], [-0.255, -0.111], [-0.368, -0.132],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 0.753, 0.055], [ 0.437, 0.314], [ 0.633, 0.375],...
 y: array([[ 0.755, 0.055], [ 0.438, 0.314], [ 0.632, 0.375],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=aslinearoperator, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[-2.302, -0.019], [-2.784, -0.111], [-1.823, -0.132],...
 y: array([[-0.754, -0.019], [-0.539, -0.111], [-0.542, -0.132],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'normal')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=normal
(mismatch 100.0%)
 x: array([[ 2.550e+06, -1.936e-02], [ -1.401e+07, -1.105e-01], [ -1.048e+07, -1.322e-01],...
 y: array([[ 3.767e+04, -1.936e-02], [ -1.225e+05, -1.105e-01], [ -9.917e+04, -1.322e-01],...

======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'buckling')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=buckling
(mismatch 100.0%)
 x: array([[ 2.924, 0.055], [-12.989, 0.314], [ -9.288, 0.375],...
 y: array([[ 0.777, 0.055], [ 0.306, 0.314], [ 0.534, 0.375],...
======================================================================
FAIL: test_arpack.test_symmetric_modes(True, , 'd', 2, 'SA', None, 0.5, , None, 'cayley')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 249, in eval_evec
    assert_allclose(LHS, RHS, rtol=rtol, atol=atol, err_msg=err)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 1178, in assert_allclose
    verbose=verbose, header=header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/testing/utils.py", line 644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=4.44089e-13, atol=4.44089e-13
error for eigsh:general, typ=d, which=SA, sigma=0.5, mattype=asarray, OPpart=None, mode=cayley
(mismatch 100.0%)
 x: array([[ -1.044e+01, -1.936e-02],
       [  3.246e+01, -1.105e-01],
       [  2.682e+01, -1.322e-01],...
 y: array([[ 0.092, -0.019],
       [ 0.276, -0.111],
       [ 0.555, -0.132],...
----------------------------------------------------------------------
Ran 5483 tests in 76.694s

FAILED (KNOWNFAIL=13, SKIP=24, errors=8, failures=66)
Out[3]:

From helmrp at yahoo.com  Thu Jul 26 22:33:25 2012
From: helmrp at yahoo.com (The Helmbolds)
Date: Thu, 26 Jul 2012 19:33:25 -0700 (PDT)
Subject: [SciPy-User] Brentq
Message-ID: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com>

The user guide says the algorithm for brentq should be obvious from inspecting the code. Unfortunately, I can't find the code.

The entire code in the definition of the brentq function (located in the zeros section of scipy.optimize) reads:

    if type(args) != type(()) :
        args = (args,)
    r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp)

So it just calls _zeros._brentq.

The module in which the brentq function is defined imports _zeros. I searched, but the only _zeros file I could find in scipy.optimize is a _zeros.pyd file, and that's a DLL file, not a Python code object. So I'm at a loss as to where the brentq code is located.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hasslerjc at comcast.net  Thu Jul 26 23:00:33 2012
From: hasslerjc at comcast.net (John Hassler)
Date: Thu, 26 Jul 2012 23:00:33 -0400
Subject: [SciPy-User] Brentq
In-Reply-To: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com>
References: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com>
Message-ID: <50120451.8010903@comcast.net>

An HTML attachment was scrubbed...
URL:

From L.Ulferts at hs-osnabrueck.de  Thu Jul 26 23:00:47 2012
From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de)
Date: 27 Jul 2012 03:00:47 -0000
Subject: [SciPy-User] ***-SPAM-*** Re: Brentq
Message-ID: <20120727030047.22309.qmail@helios.ze.fh-osnabrueck.de>

I am on vacation. Your message will not be forwarded, but will be handled by me personally from August 14. In urgent cases, please contact my colleagues Mr. Günterberg and Mr. Roetmann, or my supervisor Mr. Taeger.

Lothar Ulferts

From joonpyro at gmail.com  Thu Jul 26 23:02:47 2012
From: joonpyro at gmail.com (Joon Ro)
Date: Thu, 26 Jul 2012 22:02:47 -0500
Subject: [SciPy-User] Brentq
In-Reply-To: <50120451.8010903@comcast.net>
References: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com> <50120451.8010903@comcast.net>
Message-ID: <501204D7.3060204@gmail.com>

On Thu 26 Jul 2012 10:00:33 PM CDT, John Hassler wrote:
>
> On 7/26/2012 10:33 PM, The Helmbolds wrote:
>> The user guide says the algorithm for brentq should be obvious from
>> inspecting the code.
>> Unfortunately, I can't find the code.
>> The entire code in the definition of the brentq function (located in
>> the zeros section of scipy.optimize) reads:
>> if type(args) != type(()) :
>> args = (args,)
>> r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp)
>>
>> So it just calls _zeros._brentq.
>> The module in which the brentq function is defined imports _zeros.
>> I searched but the only _zeros file I could find in scipy.optimize is
>> a _zeros.pyd file,
>> and that's a DLL file, not a Python code object.
>> So I'm at a loss as to where the brentq code is located.

It seems the C source code is here:

https://github.com/scipy/scipy/blob/master/scipy/optimize/Zeros/brentq.c

-Joon

From hasslerjc at comcast.net  Thu Jul 26 23:11:13 2012
From: hasslerjc at comcast.net (John Hassler)
Date: Thu, 26 Jul 2012 23:11:13 -0400
Subject: [SciPy-User] Brentq
In-Reply-To: <501204D7.3060204@gmail.com>
References: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com> <50120451.8010903@comcast.net> <501204D7.3060204@gmail.com>
Message-ID: <501206D1.1060808@comcast.net>

On 7/26/2012 11:02 PM, Joon Ro wrote:
> On Thu 26 Jul 2012 10:00:33 PM CDT, John Hassler wrote:
>> On 7/26/2012 10:33 PM, The Helmbolds wrote:
>>> The user guide says the algorithm for brentq should be obvious from
>>> inspecting the code.
>>> Unfortunately, I can't find the code.
>>> The entire code in the definition of the brentq function (located in
>>> the zeros section of scipy.optimize) reads:
>>> if type(args) != type(()) :
>>> args = (args,)
>>> r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp)
>>>
>>> So it just calls _zeros._brentq.
>>> The module in which the brentq function is defined imports _zeros.
>>> I searched but the only _zeros file I could find in scipy.optimize is
>>> a _zeros.pyd file,
>>> and that's a DLL file, not a Python code object.
>>> So I'm at a loss as to where the brentq code is located.
> It seems the C source code is here:
>
> https://github.com/scipy/scipy/blob/master/scipy/optimize/Zeros/brentq.c
>
> -Joon
> _______________________________________________
>
Wikipedia has a nice exposition - also with a C code example:

http://en.wikipedia.org/wiki/Brent%27s_method

john

From charlesr.harris at gmail.com  Thu Jul 26 23:46:11 2012
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 26 Jul 2012 21:46:11 -0600
Subject: [SciPy-User] Brentq
In-Reply-To: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com>
References: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com>
Message-ID:

On Thu, Jul 26, 2012 at 8:33 PM, The Helmbolds wrote:

> The user guide says the algorithm for brentq should be obvious from
> inspecting the code.
> Unfortunately, I can't find the code.
> The entire code in the definition of the brentq function (located in the
> zeros section of scipy.optimize) reads:
>
> if type(args) != type(()) :
> args = (args,)
> r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp)
>
> So it just calls _zeros._brentq.
> The module in which the brentq function is defined imports _zeros.
> I searched but the only _zeros file I could find in scipy.optimize is a
> _zeros.pyd file,
> and that's a DLL file, not a Python code object.
> So I'm at a loss as to where the brentq code is located.
>

I wrote the code, but I would never call it obvious :) It's rather subtle, and not as clear in its structured form as when written out with gotos, since it is best understood as a state machine. The inverse quadratic interpolation isn't the subtle part, it's the control. Note that the form of the interpolation isn't that found in the original paper.

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.sinclair.za at gmail.com  Fri Jul 27 05:20:02 2012
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Fri, 27 Jul 2012 11:20:02 +0200
Subject: [SciPy-User] Scipy test errors
In-Reply-To: <5011A505.2020209@sun.ac.za>
References: <50119FDC.70704@sun.ac.za> <5011A505.2020209@sun.ac.za>
Message-ID:

On 26 July 2012 22:13, Johann Rohwer wrote:
> On 26/07/2012 21:57, Ralf Gommers wrote:
>>
>> On Thu, Jul 26, 2012 at 9:51 PM, Johann Rohwer wrote:
>>
>> The latest SciPy from GIT gives a whole bunch of test errors and failures,
>> most of them in the test_arpack module:
>>
>> Ran 5483 tests in 78.064s
>>
>> FAILED (KNOWNFAIL=13, SKIP=24, errors=8, failures=63)
>> Out[4]:
>>
>> Is this to be expected? The official release 0.11.0rc1 gives slightly fewer,
>> but still a significant number of failures (57).
>>
>> That's not expected. Could you please send us the output of the test run?
>> And with what command did you install, and what Python version?
>>
>> Ralf
> Test output below, version info included. Installation was with
> $ python setup.py build
> $ sudo python setup.py install
>
> I made sure to clean out the build directory first, and remove any scipy*
> files/dirs from the dist-packages directory.
>
> ------------------------ test output below -------------------------------
>
> In [3]: scipy.test()
> Running unit tests for scipy
> NumPy version 1.8.0.dev-b74c6be
> NumPy is installed in /usr/local/lib/python2.7/dist-packages/numpy
> SciPy version 0.12.0.dev-8e918cd
> SciPy is installed in /usr/local/lib/python2.7/dist-packages/scipy
> Python version 2.7.3 (default, Apr 20 2012, 22:39:59) [GCC 4.6.3]
> nose version 1.1.2
> ----------------------------------------------------------------------
> Ran 5483 tests in 76.694s
>
> FAILED (KNOWNFAIL=13, SKIP=24, errors=8, failures=66)
> Out[3]:

On the same platform I see the 8 errors (RuntimeWarning: overflow encountered in nbdtrik), but none of the failures.
Some of the failures you see might be caused by running the tests multiple times in the same Python process. What happens if you run the following from the command line instead of inside an IPython session? $ python -c "import scipy; scipy.test()" Here is my test output: Running unit tests for scipy NumPy version 1.8.0.dev-b74c6be NumPy is installed in /home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/numpy SciPy version 0.12.0.dev-8e918cd SciPy is installed in /home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy Python version 2.7.3 (default, Apr 20 2012, 22:39:59) [GCC 4.6.3] nose version 1.1.2 ..............................................................................................................................................................................................................................K........................................................................................................K..................................................................K..K......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSS......SSSSSS......SSSS...................................................................................................................................................................................................................................................................
...........................................................................................................K.............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................SSSSSSSSSSS............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
.............................................................................................................................................................................................................................................K..................................................................K................................................................................................................................................................KK.......................................................................................................................................................................................................................................................................................................................................................................................................................................................K.K.............................................................................................................................................................................................................................................................................................................................................................................................K........K..............SSSSSSS..............................................................................................................................EEEE....EEEE..................................S..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
........................................................................................................................................
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom cdf_ppf')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 104, in check_cdf_ppf
    ppf05 = distfn.ppf(0.5,*arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6132, in ppf
    place(output,cond,self._ppf(*goodargs) + loc)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21]), 'nbinom cdf_ppf')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 111, in check_cdf_ppf2
    npt.assert_array_equal(distfn.ppf(distfn.cdf(supp,*arg),*arg),
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6132, in ppf
    place(output,cond,self._ppf(*goodargs) + loc)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom pmf_cdf')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 136, in check_pmf_cdf
    startind = np.int(distfn._ppf(0.01,*arg)-1)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (5, 0.5), 'nbinom oth')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 156, in check_oth
    median_sf = distfn.isf(0.5, *arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6184, in isf
    place(output,cond,self._isf(*goodargs) + loc)  #PB same as ticket 766
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 5828, in _isf
    return self._ppf(1-q,*args)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), 'nbinom cdf_ppf')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 104, in check_cdf_ppf
    ppf05 = distfn.ppf(0.5,*arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6132, in ppf
    place(output,cond,self._ppf(*goodargs) + loc)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 12]), 'nbinom cdf_ppf')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 111, in check_cdf_ppf2
    npt.assert_array_equal(distfn.ppf(distfn.cdf(supp,*arg),*arg),
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6132, in ppf
    place(output,cond,self._ppf(*goodargs) + loc)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), 'nbinom pmf_cdf')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 136, in check_pmf_cdf
    startind = np.int(distfn._ppf(0.01,*arg)-1)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
======================================================================
ERROR: test_discrete_basic.test_discrete_basic(, (0.4, 0.4), 'nbinom oth')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/scott/.local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/tests/test_discrete_basic.py", line 156, in check_oth
    median_sf = distfn.isf(0.5, *arg)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6184, in isf
    place(output,cond,self._isf(*goodargs) + loc)  #PB same as ticket 766
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 5828, in _isf
    return self._ppf(1-q,*args)
  File "/home/scott/.virtualenvs/scipy-sandbox/local/lib/python2.7/site-packages/scipy/stats/distributions.py", line 6633, in _ppf
    vals = ceil(special.nbdtrik(q,n,p))
RuntimeWarning: overflow encountered in nbdtrik
----------------------------------------------------------------------
Ran 5483 tests
in 54.123s

FAILED (KNOWNFAIL=13, SKIP=35, errors=8)

Cheers,
Scott

From jr at sun.ac.za  Fri Jul 27 06:23:56 2012
From: jr at sun.ac.za (Johann Rohwer)
Date: Fri, 27 Jul 2012 12:23:56 +0200
Subject: [SciPy-User] Scipy test errors
In-Reply-To: 
References: <50119FDC.70704@sun.ac.za> <5011A505.2020209@sun.ac.za>
Message-ID: <9667757.D0MLziplea@kruimel>

On Friday 27 July 2012 11:20:02 Scott Sinclair wrote:
> On the same platform I see the 8 errors (RuntimeWarning: overflow
> encountered in nbdtrik), but none of the failures. Some of the
> failures you see might be caused by running the tests multiple times
> in the same Python process. What happens if you run the following
> from the command line instead of inside an IPython session?
>
> $ python -c "import scipy; scipy.test()"

Makes no difference, I still get all the failures listed previously. I had run the tests from within IPython, but for every new test I restarted the IPython session, so the tests were definitely not run multiple times in the same Python process. I also tried compiling scipy against stable (1.6.2) numpy, but it makes no difference.

I've reverted to scipy 0.10.1, which gives me no errors and no failures. Can't really understand why it works fine in your case.

--Johann

E-mail disclaimer: This e-mail may contain confidential information and may be legally privileged and is intended only for the person to whom it is addressed. If you are not the intended recipient, you are notified that you may not use, distribute or copy this document in any manner whatsoever. Kindly also notify the sender immediately by telephone, and delete the e-mail. The University does not accept liability for any damage, loss or expense arising from this e-mail and/or accessing any files attached to this e-mail.

From scott.sinclair.za at gmail.com  Fri Jul 27 06:36:25 2012
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Fri, 27 Jul 2012 12:36:25 +0200
Subject: [SciPy-User] Scipy test errors
In-Reply-To: <9667757.D0MLziplea@kruimel>
References: <50119FDC.70704@sun.ac.za> <5011A505.2020209@sun.ac.za> <9667757.D0MLziplea@kruimel>
Message-ID:

On 27 July 2012 12:23, Johann Rohwer wrote:
> On Friday 27 July 2012 11:20:02 Scott Sinclair wrote:
>> On the same platform I see the 8 errors (RuntimeWarning: overflow
>> encountered in nbdtrik), but none of the failures. Some of the
>> failures you see might be caused by running the tests multiple times
>> in the same Python process. What happens if you run the following
>> from the command line instead of inside an IPython session?
>>
>> $ python -c "import scipy; scipy.test()"
>
> Makes no difference, I still get all the failures listed previously.

Then I find it strange that you were seeing failures like "AssertionError: DeprecationWarning not raised" and "AssertionError: BadCoefficients not raised"?
> I've reverted to scipy 0.10.1 which gives me no errors and no > failures. Can't really understand why it works fine in your case. Something to worry about some other time then. Cheers, Scott From L.Ulferts at hs-osnabrueck.de Fri Jul 27 06:36:54 2012 From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de) Date: 27 Jul 2012 10:36:54 -0000 Subject: [SciPy-User] Scipy test errors Message-ID: <20120727103654.31355.qmail@helios.ze.fh-osnabrueck.de> Ich befinde mich im Urlaub. Ihre Nachricht wird nicht weitergereicht, sondern ab dem 14. August von mir pers?nlich bearbeitet werden. In dringenden F?llen k?nnen Sie sich an meine Kollegen Herrn G?nterberg und Roetmann oder meinen Fachvorgesetzten Herrn Taeger wenden. Lothar Ulferts From sturla at molden.no Fri Jul 27 07:45:38 2012 From: sturla at molden.no (Sturla Molden) Date: Fri, 27 Jul 2012 13:45:38 +0200 Subject: [SciPy-User] Scipy test errors In-Reply-To: <20120727103654.31355.qmail@helios.ze.fh-osnabrueck.de> References: <20120727103654.31355.qmail@helios.ze.fh-osnabrueck.de> Message-ID: <0FC9E8E7-98AA-4ADD-94CE-F9520BD68324@molden.no> Admin, please get Lothar off the list? Sturla Sendt fra min iPad Den 27. juli 2012 kl. 12:36 skrev L.Ulferts at hs-osnabrueck.de: > Ich befinde mich im Urlaub. > Ihre Nachricht wird nicht weitergereicht, > sondern ab dem 14. August von mir pers?nlich > bearbeitet werden. In dringenden F?llen k?nnen > Sie sich an meine Kollegen Herrn G?nterberg und > Roetmann oder meinen Fachvorgesetzten Herrn Taeger > wenden. > > Lothar Ulferts > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From L.Ulferts at hs-osnabrueck.de Fri Jul 27 07:45:46 2012 From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de) Date: 27 Jul 2012 11:45:46 -0000 Subject: [SciPy-User] Scipy test errors Message-ID: <20120727114546.25826.qmail@helios.ze.fh-osnabrueck.de> Ich befinde mich im Urlaub. 
Your message will not be forwarded, but will be handled by me personally starting 14 August. In urgent cases, please contact my colleagues Mr. Günterberg and Mr. Roetmann, or my supervisor Mr. Taeger.

Lothar Ulferts

From robert.kern at gmail.com Fri Jul 27 07:51:44 2012
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 27 Jul 2012 12:51:44 +0100
Subject: [SciPy-User] Scipy test errors
In-Reply-To: <0FC9E8E7-98AA-4ADD-94CE-F9520BD68324@molden.no>
References: <20120727103654.31355.qmail@helios.ze.fh-osnabrueck.de> <0FC9E8E7-98AA-4ADD-94CE-F9520BD68324@molden.no>
Message-ID: 

2012/7/27 Sturla Molden :
> Admin, please get Lothar off the list?

Admin is also on vacation. May take some time. Please have patience and/or set up your personal email filters to block his automated emails.

Thank you.

--
Robert Kern

From L.Ulferts at hs-osnabrueck.de Fri Jul 27 07:53:38 2012
From: L.Ulferts at hs-osnabrueck.de (L.Ulferts at hs-osnabrueck.de)
Date: 27 Jul 2012 11:53:38 -0000
Subject: [SciPy-User] Scipy test errors
Message-ID: <20120727115338.31604.qmail@helios.ze.fh-osnabrueck.de>

I am on vacation. Your message will not be forwarded, but will be handled by me personally starting 14 August. In urgent cases, please contact my colleagues Mr. Günterberg and Mr. Roetmann, or my supervisor Mr. Taeger.

Lothar Ulferts

From helmrp at yahoo.com Fri Jul 27 08:26:08 2012
From: helmrp at yahoo.com (Robaula)
Date: Fri, 27 Jul 2012 05:26:08 -0700
Subject: [SciPy-User] Brentq
In-Reply-To: <50120451.8010903@comcast.net>
References: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com> <50120451.8010903@comcast.net>
Message-ID: <12F29036-078E-4093-AF22-A1D1F22C789A@yahoo.com>

Regrettably, this just repeats what is in the SciPy user guide, including the assertion that "it should be easy to understand the algorithm just by reading our code." But your reference contains no lines of code.
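As an aside for anyone following the thread who wants to poke at the function itself rather than its source, the Python-level wrapper is easy to exercise directly. A minimal sketch using the documented scipy.optimize.brentq signature, where f must change sign over [a, b]:

```python
from scipy.optimize import brentq

# f(0) = -2 and f(2) = 2 have opposite signs, so [0, 2] brackets a root.
f = lambda x: x * x - 2.0
root = brentq(f, 0.0, 2.0, xtol=1e-12)
print(root)  # close to sqrt(2) ~ 1.4142135623730951
```

The returned root lies within the bracketing interval up to the requested tolerance, which is the property questioned elsewhere in this thread.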
Besides, I'm of the opinion that documentation that refers users to the code is unsatisfactory. On Jul 26, 2012, at 8:00 PM, John Hassler wrote: > > On 7/26/2012 10:33 PM, The Helmbolds wrote: >> The user guide says the algorithm for brentq should be obvious from inspecting the code. >> Unfortunately, I can't find the code. >> The entire code in the definition of the brentq function (located in the zeros section of scipy.optimize) reads: >> >> if type(args) != type(()) : >> args = (args,) >> r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp) >> >> So it just calls _zeros._brentq. >> The module in which the brentq function is defined imports _zeros. >> I searched but the only _zeros file I could find in scipy.optimize is a _zeros.pyd file, >> and that's a DLL file, not a Python code object. >> So I'm at a loss as to where the brentq code is located. >> >> > Maybe this? > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html > john > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Jul 27 08:26:03 2012 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Jul 2012 13:26:03 +0100 Subject: [SciPy-User] Scipy test errors In-Reply-To: References: <20120727103654.31355.qmail@helios.ze.fh-osnabrueck.de> <0FC9E8E7-98AA-4ADD-94CE-F9520BD68324@molden.no> Message-ID: On Fri, Jul 27, 2012 at 12:51 PM, Robert Kern wrote: > 2012/7/27 Sturla Molden : >> Admin, please get Lothar off the list? > > Admin is also on vacation. May take some time. Please have patience > and/or set up your personal email filters to block his automated > emails. > > Thank you. It should be taken care of now. 
--
Robert Kern

From helmrp at yahoo.com Fri Jul 27 09:29:30 2012
From: helmrp at yahoo.com (Robaula)
Date: Fri, 27 Jul 2012 06:29:30 -0700
Subject: [SciPy-User] Brentq
In-Reply-To: 
References: <1343356405.91317.YahooMailNeo@web31803.mail.mud.yahoo.com>
Message-ID: <35A71BE4-6703-4C19-87DA-E08D82DC4529@yahoo.com>

The original question had to do with whether the code could produce a value slightly to the left of the bracketing interval. I never had to learn C, and don't like it. But I did pull up the C code, thanks to Joon's pointer to it. I noticed that your comments mention that "the order of xa [the current estimate] and xp [the previous estimate] doesn't matter". Does this mean that xp could be less than xa? If so, then I suppose it could fall slightly to the left of the bracketing interval [xa, xb]?

On Jul 26, 2012, at 8:46 PM, Charles R Harris wrote:
>
> On Thu, Jul 26, 2012 at 8:33 PM, The Helmbolds wrote:
> The user guide says the algorithm for brentq should be obvious from inspecting the code.
>
> I wrote the code, but I would never call it obvious :0 It's rather subtle, and not as clear in its structured form as when written out with gotos, since it is best understood as a state machine. The inverse quadratic interpolation isn't the subtle part, it's the control. Note that the form of the interpolation isn't that found in the original paper.
>
> Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From joonpyro at gmail.com Fri Jul 27 12:30:29 2012
From: joonpyro at gmail.com (Joon Ro)
Date: Fri, 27 Jul 2012 11:30:29 -0500
Subject: [SciPy-User] specifying range in scipy.stats.truncnorm
Message-ID: <5012C225.3090507@gmail.com>

Hi,

I tried to use scipy.stats.truncnorm and found the way of specifying the parameters of the truncated normal very confusing.
I expected a, b parameter to be the specification of the interval where I want to truncate the distribution at, but it is not the case when the normal I want to use is not standard. According to the documentation, I need to standardize my values - for example, if I want to have a truncated normal with mean 0.5, variance 1, on [0, 1] interval, I need to do: myclip_a = 0 myclip_b = 1 my_mean=0.5 my_std =1 a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std rv = truncnorm(a, b, loc=my_mean, scale=my_std) Which is unnecessarily complicated in my opinion. Since we have to provide location and scale parameter anyway, why not make truncnorm to accept the actual interval values (in this case, a, b = 0, 1) instead and do the standardization internally? I think it would be more intuitive that way. Best regards, Joon -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Jul 27 13:07:15 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 27 Jul 2012 13:07:15 -0400 Subject: [SciPy-User] specifying range in scipy.stats.truncnorm In-Reply-To: <5012C225.3090507@gmail.com> References: <5012C225.3090507@gmail.com> Message-ID: On Fri, Jul 27, 2012 at 12:30 PM, Joon Ro wrote: > Hi, > > I tried to use scipy.stats.truncnorm and found the way to specifying the > parameters of truncated normal very confusing. > I expected a, b parameter to be the specification of the interval where I > want to truncate the distribution at, but it is not the case when the normal > I want to use is not standard. 
> > According to the documentation, I need to standardize my values - for > example, if I want to have a truncated normal with mean 0.5, variance 1, on > [0, 1] interval, I need to do: > > myclip_a = 0 > myclip_b = 1 > my_mean=0.5 > my_std =1 > > a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std > > rv = truncnorm(a, b, loc=my_mean, scale=my_std) > > Which is unnecessarily complicated in my opinion. Since we have to provide > location and scale parameter anyway, why not make truncnorm to accept the > actual interval values (in this case, a, b = 0, 1) instead and do the > standardization internally? I think it would be more intuitive that way. I agree there are several cases of distributions where the parameterization is not very intuitive or common. The problem is loc and scale and the corresponding transformation of the support is done generically. So, I don't think it's possible to change this without a change in the generic setup for the distributions or writing a specific dispatch function or class that does the conversion. I think, changing the generic setup would break the standard behavior of distributions that have a predefined finite support limit, like those that are defined for positive real numbers, a=0, or rdist with a=-1, b=1. 
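The conversion Joon describes is mechanical enough to wrap once and forget; a sketch of a hypothetical convenience function (not part of scipy) built on the documented truncnorm parameterization:

```python
from scipy.stats import truncnorm

def truncnorm_on_interval(clip_a, clip_b, mean, std):
    # Standardize the clip points, as the truncnorm docs require,
    # then freeze the distribution with the original loc/scale.
    a, b = (clip_a - mean) / std, (clip_b - mean) / std
    return truncnorm(a, b, loc=mean, scale=std)

# A normal with mean 0.5 and std 1, truncated to [0, 1]:
rv = truncnorm_on_interval(0.0, 1.0, mean=0.5, std=1.0)
print(rv.cdf(0.0), rv.cdf(1.0))  # 0.0 and 1.0: all mass lies in [0, 1]
```

The wrapper name and interface are illustrative only; the underlying call is exactly the standardization from Joon's example.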
Josef > > Best regards, > Joon > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From joonpyro at gmail.com Fri Jul 27 13:58:40 2012 From: joonpyro at gmail.com (Joon Ro) Date: Fri, 27 Jul 2012 12:58:40 -0500 Subject: [SciPy-User] specifying range in scipy.stats.truncnorm In-Reply-To: References: <5012C225.3090507@gmail.com> Message-ID: <5012D6D0.2020305@gmail.com> On Fri 27 Jul 2012 12:07:15 PM CDT, josef.pktd at gmail.com wrote: > On Fri, Jul 27, 2012 at 12:30 PM, Joon Ro wrote: >> Hi, >> >> I tried to use scipy.stats.truncnorm and found the way to specifying the >> parameters of truncated normal very confusing. >> I expected a, b parameter to be the specification of the interval where I >> want to truncate the distribution at, but it is not the case when the normal >> I want to use is not standard. >> >> According to the documentation, I need to standardize my values - for >> example, if I want to have a truncated normal with mean 0.5, variance 1, on >> [0, 1] interval, I need to do: >> >> myclip_a = 0 >> myclip_b = 1 >> my_mean=0.5 >> my_std =1 >> >> a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std >> >> rv = truncnorm(a, b, loc=my_mean, scale=my_std) >> >> Which is unnecessarily complicated in my opinion. Since we have to provide >> location and scale parameter anyway, why not make truncnorm to accept the >> actual interval values (in this case, a, b = 0, 1) instead and do the >> standardization internally? I think it would be more intuitive that way. > > I agree there are several cases of distributions where the > parameterization is not very intuitive or common. The problem is loc > and scale and the corresponding transformation of the support is done > generically. 
> > So, I don't think it's possible to change this without a change in the > generic setup for the distributions or writing a specific dispatch > function or class that does the conversion. > I think, changing the generic setup would break the standard behavior > of distributions that have a predefined finite support limit, like > those that are defined for positive real numbers, a=0, or rdist with > a=-1, b=1. > > Josef > I just took a look at the code, and I agree. I wonder if it would be possible to add a couple of more parameters (in this case, representing the not-standardized interval) with default None to the generic rv_continuous class and when they are passed instead of a and b, let a distribution specific function do the standardization and calculate a and b. -Joon From josef.pktd at gmail.com Fri Jul 27 14:39:50 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 27 Jul 2012 14:39:50 -0400 Subject: [SciPy-User] specifying range in scipy.stats.truncnorm In-Reply-To: <5012D6D0.2020305@gmail.com> References: <5012C225.3090507@gmail.com> <5012D6D0.2020305@gmail.com> Message-ID: On Fri, Jul 27, 2012 at 1:58 PM, Joon Ro wrote: > On Fri 27 Jul 2012 12:07:15 PM CDT, josef.pktd at gmail.com wrote: >> On Fri, Jul 27, 2012 at 12:30 PM, Joon Ro wrote: >>> Hi, >>> >>> I tried to use scipy.stats.truncnorm and found the way to specifying the >>> parameters of truncated normal very confusing. >>> I expected a, b parameter to be the specification of the interval where I >>> want to truncate the distribution at, but it is not the case when the normal >>> I want to use is not standard. 
>>> >>> According to the documentation, I need to standardize my values - for >>> example, if I want to have a truncated normal with mean 0.5, variance 1, on >>> [0, 1] interval, I need to do: >>> >>> myclip_a = 0 >>> myclip_b = 1 >>> my_mean=0.5 >>> my_std =1 >>> >>> a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std >>> >>> rv = truncnorm(a, b, loc=my_mean, scale=my_std) >>> >>> Which is unnecessarily complicated in my opinion. Since we have to provide >>> location and scale parameter anyway, why not make truncnorm to accept the >>> actual interval values (in this case, a, b = 0, 1) instead and do the >>> standardization internally? I think it would be more intuitive that way. >> >> I agree there are several cases of distributions where the >> parameterization is not very intuitive or common. The problem is loc >> and scale and the corresponding transformation of the support is done >> generically. >> >> So, I don't think it's possible to change this without a change in the >> generic setup for the distributions or writing a specific dispatch >> function or class that does the conversion. >> I think, changing the generic setup would break the standard behavior >> of distributions that have a predefined finite support limit, like >> those that are defined for positive real numbers, a=0, or rdist with >> a=-1, b=1. >> >> Josef >> > > I just took a look at the code, and I agree. > > I wonder if it would be possible to add a couple of more parameters (in > this case, representing the not-standardized interval) with default > None to the generic rv_continuous class and when they are passed > instead of a and b, let a distribution specific function do the > standardization and calculate a and b. a,b are set and would have to be adjusted in _argcheck. _argcheck is currently called only with the shape parameters, but not with loc and scale as argument. It would be possible to adjust this. 
My guess is that having _argcheck compensate for loc and scale should work. Having a possible change in behavior and extra parameters might get confusing. (distributions are instances and not classes, so care needs to be taken that there are no unwanted spillovers from one use to the next.)

If you use frozen distributions, as in your initial example, then doing the reparameterization in the frozen class might be easier than in the original classes.

Josef

>
> -Joon
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From joonpyro at gmail.com Fri Jul 27 20:49:28 2012
From: joonpyro at gmail.com (Joon Ro)
Date: Fri, 27 Jul 2012 19:49:28 -0500
Subject: [SciPy-User] specifying range in scipy.stats.truncnorm
In-Reply-To: 
References: <5012C225.3090507@gmail.com> <5012D6D0.2020305@gmail.com>
Message-ID: <50133718.5040203@gmail.com>

On Fri 27 Jul 2012 01:39:50 PM CDT, josef.pktd at gmail.com wrote:
> On Fri, Jul 27, 2012 at 1:58 PM, Joon Ro wrote:
>> On Fri 27 Jul 2012 12:07:15 PM CDT, josef.pktd at gmail.com wrote:
>>> On Fri, Jul 27, 2012 at 12:30 PM, Joon Ro wrote:
>>>> Hi,
>>>>
>>>> I tried to use scipy.stats.truncnorm and found the way to specifying the
>>>> parameters of truncated normal very confusing.
>>>> I expected a, b parameter to be the specification of the interval where I
>>>> want to truncate the distribution at, but it is not the case when the normal
>>>> I want to use is not standard.
>>>>
>>>> According to the documentation, I need to standardize my values - for
>>>> example, if I want to have a truncated normal with mean 0.5, variance 1, on
>>>> [0, 1] interval, I need to do:
>>>>
>>>> myclip_a = 0
>>>> myclip_b = 1
>>>> my_mean=0.5
>>>> my_std =1
>>>>
>>>> a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std
>>>>
>>>> rv = truncnorm(a, b, loc=my_mean, scale=my_std)
>>>>
>>>> Which is unnecessarily complicated in my opinion.
Since we have to provide >>>> location and scale parameter anyway, why not make truncnorm to accept the >>>> actual interval values (in this case, a, b = 0, 1) instead and do the >>>> standardization internally? I think it would be more intuitive that way. >>> >>> I agree there are several cases of distributions where the >>> parameterization is not very intuitive or common. The problem is loc >>> and scale and the corresponding transformation of the support is done >>> generically. >>> >>> So, I don't think it's possible to change this without a change in the >>> generic setup for the distributions or writing a specific dispatch >>> function or class that does the conversion. >>> I think, changing the generic setup would break the standard behavior >>> of distributions that have a predefined finite support limit, like >>> those that are defined for positive real numbers, a=0, or rdist with >>> a=-1, b=1. >>> >>> Josef >>> >> >> I just took a look at the code, and I agree. >> >> I wonder if it would be possible to add a couple of more parameters (in >> this case, representing the not-standardized interval) with default >> None to the generic rv_continuous class and when they are passed >> instead of a and b, let a distribution specific function do the >> standardization and calculate a and b. > > a,b are set and would have to be adjusted in _argcheck. > _argcheck is currently called only with the shape parameters, but not > with loc and scale as argument. It would be possible to adjust this. > My guess is that having _argcheck compensate for loc and scale should > work. Having a possible change in behavior and extra parameters might > get confusing. (distributions are instances and not classes, so care > needs to be taken that there are no unwanted spillovers from one use > to the next.) > > If you use frozen distributions, as in your initial example, then > doing the reparameterization in the frozen class might be easier, then > in the original classes. 
> I also think changing what a and b represent is the best way but I wonder if it is okay (for compatibility reasons) -Joon From josef.pktd at gmail.com Fri Jul 27 23:05:02 2012 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 27 Jul 2012 23:05:02 -0400 Subject: [SciPy-User] specifying range in scipy.stats.truncnorm In-Reply-To: <50133718.5040203@gmail.com> References: <5012C225.3090507@gmail.com> <5012D6D0.2020305@gmail.com> <50133718.5040203@gmail.com> Message-ID: On Fri, Jul 27, 2012 at 8:49 PM, Joon Ro wrote: > On Fri 27 Jul 2012 01:39:50 PM CDT, josef.pktd at gmail.com wrote: >> On Fri, Jul 27, 2012 at 1:58 PM, Joon Ro wrote: >>> On Fri 27 Jul 2012 12:07:15 PM CDT, josef.pktd at gmail.com wrote: >>>> On Fri, Jul 27, 2012 at 12:30 PM, Joon Ro wrote: >>>>> Hi, >>>>> >>>>> I tried to use scipy.stats.truncnorm and found the way to specifying the >>>>> parameters of truncated normal very confusing. >>>>> I expected a, b parameter to be the specification of the interval where I >>>>> want to truncate the distribution at, but it is not the case when the normal >>>>> I want to use is not standard. >>>>> >>>>> According to the documentation, I need to standardize my values - for >>>>> example, if I want to have a truncated normal with mean 0.5, variance 1, on >>>>> [0, 1] interval, I need to do: >>>>> >>>>> myclip_a = 0 >>>>> myclip_b = 1 >>>>> my_mean=0.5 >>>>> my_std =1 >>>>> >>>>> a, b = (myclip_a - my_mean) / my_std, (myclip_b - my_mean) / my_std >>>>> >>>>> rv = truncnorm(a, b, loc=my_mean, scale=my_std) >>>>> >>>>> Which is unnecessarily complicated in my opinion. Since we have to provide >>>>> location and scale parameter anyway, why not make truncnorm to accept the >>>>> actual interval values (in this case, a, b = 0, 1) instead and do the >>>>> standardization internally? I think it would be more intuitive that way. >>>> >>>> I agree there are several cases of distributions where the >>>> parameterization is not very intuitive or common. 
The problem is loc
>>>> and scale and the corresponding transformation of the support is done
>>>> generically.
>>>>
>>>> So, I don't think it's possible to change this without a change in the
>>>> generic setup for the distributions or writing a specific dispatch
>>>> function or class that does the conversion.
>>>> I think, changing the generic setup would break the standard behavior
>>>> of distributions that have a predefined finite support limit, like
>>>> those that are defined for positive real numbers, a=0, or rdist with
>>>> a=-1, b=1.
>>>>
>>>> Josef
>>>>
>>>
>>> I just took a look at the code, and I agree.
>>>
>>> I wonder if it would be possible to add a couple of more parameters (in
>>> this case, representing the not-standardized interval) with default
>>> None to the generic rv_continuous class and when they are passed
>>> instead of a and b, let a distribution specific function do the
>>> standardization and calculate a and b.
>>
>> a,b are set and would have to be adjusted in _argcheck.
>> _argcheck is currently called only with the shape parameters, but not
>> with loc and scale as argument. It would be possible to adjust this.
>> My guess is that having _argcheck compensate for loc and scale should
>> work. Having a possible change in behavior and extra parameters might
>> get confusing. (distributions are instances and not classes, so care
>> needs to be taken that there are no unwanted spillovers from one use
>> to the next.)
>>
>> If you use frozen distributions, as in your initial example, then
>> doing the reparameterization in the frozen class might be easier, then
>> in the original classes.
>>
>
> I also think changing what a and b represent is the best way but I
> wonder if it is okay (for compatibility reasons)

Not for the distribution classes; we need a and b for the standard(ized) distributions.

Take for example lognorm: a=0 in the standard case (loc=0, scale=1), and the lower bound of the shifted distribution is loc. Or pareto: a=1, and the lower bound of the shifted distribution is loc+1. There is no separate way to choose a different lower bound; loc is the relevant parameter, not a. Very few distributions, like the truncated distributions, have a and b explicitly as parameters.

Josef

>
> -Joon
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From zfyuan at mail.ustc.edu.cn Sat Jul 28 12:23:00 2012
From: zfyuan at mail.ustc.edu.cn (Jeffrey)
Date: Sun, 29 Jul 2012 00:23:00 +0800
Subject: [SciPy-User] scipy.stats.kendalltau bug?
Message-ID: <501411E4.6060308@mail.ustc.edu.cn>

Dear all,

The statements below will always raise an error, as follows, which is a little anomalous. Is this a bug?

>>> u1=numpy.random.rand(100000)
>>> u2=numpy.random.rand(100000)
>>> scipy.stats.kendalltau(u1,u2)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/home/zfyuan/phd/paper1/pyvine_lap/ in ()
----> 1 sp.stats.kendalltau(u1,u2)

/usr/lib64/python2.7/site-packages/scipy/stats/stats.pyc in kendalltau(x, y, initial_lexsort)
   2673
   2674     tau = ((tot - (v + u - t)) - 2.0 * exchanges) / \
-> 2675           np.sqrt((tot - u) * (tot - v))
   2676
   2677     # what follows reproduces the ending of Gary Strangman's original

AttributeError: sqrt

--
Jeffrey

From zfyuan at mail.ustc.edu.cn Sat Jul 28 12:48:17 2012
From: zfyuan at mail.ustc.edu.cn (Jeffrey)
Date: Sun, 29 Jul 2012 00:48:17 +0800
Subject: [SciPy-User] scipy.stats.kendalltau bug?
In-Reply-To: <501411E4.6060308@mail.ustc.edu.cn>
References: <501411E4.6060308@mail.ustc.edu.cn>
Message-ID: <501417D1.8050402@mail.ustc.edu.cn>

On 07/29/2012 12:23 AM, Jeffrey wrote:
> Dear all,
>
> The sentences bellow will always raise an Error or Exception just
> as follows, which is a little anomaly. Is this a bug?
>
> >>> u1=numpy.random.rand(100000)
> >>> u2=numpy.random.rand(100000)
> >>> scipy.stats.kendalltau(u1,u2)
> ---------------------------------------------------------------------------
> AttributeError                            Traceback (most recent call last)
> /home/zfyuan/phd/paper1/pyvine_lap/ in ()
> ----> 1 sp.stats.kendalltau(u1,u2)
>
> /usr/lib64/python2.7/site-packages/scipy/stats/stats.pyc in kendalltau(x, y, initial_lexsort)
>    2673
>    2674     tau = ((tot - (v + u - t)) - 2.0 * exchanges) / \
> -> 2675           np.sqrt((tot - u) * (tot - v))
>    2676
>    2677     # what follows reproduces the ending of Gary Strangman's original
>
> AttributeError: sqrt
>

Sorry, I didn't describe this bug in detail. What I mean is that when the two arrays have larger length, for example length 100000, the error is more likely to occur.

My scipy version is 0.9.0 and numpy is 1.6.2.

Thanks a lot for your answers.

--
Jeffrey

From e.antero.tammi at gmail.com Sat Jul 28 13:06:25 2012
From: e.antero.tammi at gmail.com (eat)
Date: Sat, 28 Jul 2012 20:06:25 +0300
Subject: [SciPy-User] scipy.stats.kendalltau bug?
In-Reply-To: <501417D1.8050402@mail.ustc.edu.cn>
References: <501411E4.6060308@mail.ustc.edu.cn> <501417D1.8050402@mail.ustc.edu.cn>
Message-ID: 

Hi,

On Sat, Jul 28, 2012 at 7:48 PM, Jeffrey wrote:
> On 07/29/2012 12:23 AM, Jeffrey wrote:
> > Dear all,
> >
> > The sentences bellow will always raise an Error or Exception just
> > as follows, which is a little anomaly. Is this a bug?
> > >>> u1=numpy.random.rand(100000)
> > >>> u2=numpy.random.rand(100000)
> > >>> scipy.stats.kendalltau(u1,u2)
> > ---------------------------------------------------------------------------
> > AttributeError                            Traceback (most recent call last)
> > /home/zfyuan/phd/paper1/pyvine_lap/ in ()
> > ----> 1 sp.stats.kendalltau(u1,u2)
> >
> > /usr/lib64/python2.7/site-packages/scipy/stats/stats.pyc in kendalltau(x, y, initial_lexsort)
> >    2673
> >    2674     tau = ((tot - (v + u - t)) - 2.0 * exchanges) / \
> > -> 2675           np.sqrt((tot - u) * (tot - v))
> >    2676
> >    2677     # what follows reproduces the ending of Gary Strangman's original
> >
> > AttributeError: sqrt
>
> Sorry, I didn't describe this bug with details. What I mean is that when
> the two array have larger length, for example with length 100000, then
> it is more possible that the Error would occur.
>
> My scipy version is 0.9.0 and numpy is 1.6.2.
>
> Thanks a lot for your answering.

I can confirm this, like:

In []: os.sys.version
Out[]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'
In []: np.version.version
Out[]: '1.6.0'
In []: sp.version.version
Out[]: '0.9.0'
In []: stats.kendalltau(rand(77929), rand(77929))
Out[]: (0.0060807135427758865, 0.010891543687108114)
In []: stats.kendalltau(rand(77939), rand(77939))
------------------------------------------------------------
Traceback (most recent call last):
  File "", line 1, in
  File "C:\Python27\lib\site-packages\scipy\stats\stats.py", line 2675, in kendalltau
    np.sqrt((tot - u) * (tot - v))
AttributeError: sqrt

There really seems to be an odd problem above a certain array length.

My 2 cents,
-eat

>
> --
> Jeffrey
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part --------------
An HTML attachment was scrubbed...
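The 77929-to-77939 threshold eat observes is consistent with 64-bit integer overflow: the pair count tot = n*(n-1)/2 first exceeds sqrt(2**63 - 1) in exactly that gap, so the product (tot - u) * (tot - v) is promoted to an arbitrary-precision Python int, and np.sqrt, unable to cast it, falls back to an object loop that looks for a .sqrt() method, hence the bare "AttributeError: sqrt". A sketch of the float workaround (my reading of the traceback, not an official fix; newer SciPy releases no longer exhibit the error):

```python
import numpy as np

def kendall_denominator(n, u=0, v=0):
    # tot squared overflows int64 once n is around 78000, so do the
    # multiplication in float64 before handing it to np.sqrt.
    tot = n * (n - 1) // 2
    return np.sqrt(float(tot - u) * float(tot - v))

print(kendall_denominator(77939))  # a finite float, no AttributeError
```

With u = v = 0 the denominator is just tot, so the function can be checked against the exact pair count.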
URL: 

From aclark at aclark.net Sat Jul 28 19:18:33 2012
From: aclark at aclark.net (Alex Clark)
Date: Sat, 28 Jul 2012 19:18:33 -0400
Subject: [SciPy-User] ANN: pythonpackages.com beta
Message-ID: 

Hi Science folks,

I am reaching out to various Python-related programming communities in order to offer new help packaging your software. If you have ever struggled with packaging and releasing Python software (e.g. to PyPI), please check out this service:

- http://pythonpackages.com

The basic idea is to automate packaging by checking out code, testing, and uploading (e.g. to PyPI) all through the web, as explained in this introduction:

- http://docs.pythonpackages.com/en/latest/introduction.html

Also, I will be available to answer your Python packaging questions most days/nights in #pythonpackages on irc.freenode.net. Hope to meet/talk with all of you soon.

Alex

--
Alex Clark - http://pythonpackages.com/ONE_CLICK

From dalupus at gmail.com Sat Jul 28 19:37:02 2012
From: dalupus at gmail.com (Michael Crawford)
Date: Sat, 28 Jul 2012 19:37:02 -0400
Subject: [SciPy-User] error installing scipy 0.10.1 with python 3.2
Message-ID: <8B0ACCCC-4E49-4266-BEC1-79D430ED1CF1@gmail.com>

I am trying to get scipy installed for python 3.2 on OSX 10.8 and am receiving the following compile error:

building 'scipy.sparse.linalg.dsolve._superlu' extension
compiling C sources
C compiler: /usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.8

compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 -I/Users/dalupus/.virtualenvs/test3/lib/python3.2/site-packages/numpy/core/include -I/usr/local/bin/../Cellar/python3/3.2.3/include/python3.2m -c'
extra options: '-msse3'
clang: scipy/sparse/linalg/dsolve/_superluobject.c
clang: scipy/sparse/linalg/dsolve/_superlu_utils.c
clang: scipy/sparse/linalg/dsolve/_superlumodule.c
scipy/sparse/linalg/dsolve/_superlumodule.c:268:9: error: non-void function 'PyInit__superlu' should return a value [-Wreturn-type]
        return;

I have no idea where to even start to fix this. Any ideas are appreciated.

Thanks,
Mike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guziy.sasha at gmail.com Sat Jul 28 19:46:40 2012
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Sat, 28 Jul 2012 19:46:40 -0400
Subject: [SciPy-User] error installing scipy 0.10.1 with python 3.2
In-Reply-To: <8B0ACCCC-4E49-4266-BEC1-79D430ED1CF1@gmail.com>
References: <8B0ACCCC-4E49-4266-BEC1-79D430ED1CF1@gmail.com>
Message-ID: 

Hi,

what happens if you change line 268 (in scipy/sparse/linalg/dsolve/_superlumodule.c)

from
return;

to
return NULL;

Cheers

--
Oleksandr Huziy

2012/7/28 Michael Crawford
> I am trying to get scipy installed for python 3.2 on OSX 10.8 and am
> receiving the following compile error:
>
> building 'scipy.sparse.linalg.dsolve._superlu' extension
> compiling C sources
> C compiler: /usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall
> -Wstrict-prototypes -Os -w -pipe -march=native -Qunused-arguments
> -mmacosx-version-min=10.8
>
> compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1
> -I/Users/dalupus/.virtualenvs/test3/lib/python3.2/site-packages/numpy/core/include
> -I/usr/local/bin/../Cellar/python3/3.2.3/include/python3.2m -c'
> extra options: '-msse3'
> clang: scipy/sparse/linalg/dsolve/_superluobject.c
> clang: scipy/sparse/linalg/dsolve/_superlu_utils.c
> clang: scipy/sparse/linalg/dsolve/_superlumodule.c
> scipy/sparse/linalg/dsolve/_superlumodule.c:268:9: error: non-void
> function 'PyInit__superlu' should return a value [-Wreturn-type]
>         return;
>
> I have no idea where to even start to fix this. Any ideas are
> appreciated.
>
> Thanks,
> Mike
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dalupus at gmail.com Sat Jul 28 19:55:31 2012 From: dalupus at gmail.com (Michael Crawford) Date: Sat, 28 Jul 2012 19:55:31 -0400 Subject: [SciPy-User] error installing scipy 0.10.1 with python 3.2 In-Reply-To: References: <8B0ACCCC-4E49-4266-BEC1-79D430ED1CF1@gmail.com> Message-ID: <231CC66C-D941-4B02-8B61-4BDD96845E38@gmail.com> Then it compiles and I can run install :) Thanks a ton. On Jul 28, 2012, at 7:46 PM, Oleksandr Huziy wrote: > Hi, > > what happens if you change the line 268 ( in scipy/sparse/linalg/dsolve/_superlumodule.c) > > from > return; > > to > return NULL; > > Cheers > > -- > Oleksandr Huziy > > 2012/7/28 Michael Crawford > I am trying to get scipy installed for python 3.2 on OSX 10.8 and am receiving the following compile error: > > building 'scipy.sparse.linalg.dsolve._superlu' extension > compiling C sources > C compiler: /usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Os -w -pipe -march=native -Qunused-arguments -mmacosx-version-min=10.8 > > compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 -I/Users/dalupus/.virtualenvs/test3/lib/python3.2/site-packages/numpy/core/include -I/usr/local/bin/../Cellar/python3/3.2.3/include/python3.2m -c' > extra options: '-msse3' > clang: scipy/sparse/linalg/dsolve/_superluobject.c > clang: scipy/sparse/linalg/dsolve/_superlu_utils.c > clang: scipy/sparse/linalg/dsolve/_superlumodule.c > scipy/sparse/linalg/dsolve/_superlumodule.c:268:9: error: non-void function 'PyInit__superlu' should return a value [-Wreturn-type] > return; > > I have no idea where to even start to fix this. Any ideas are appreciated. 
> > Thanks, > Mike > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Sat Jul 28 21:09:52 2012 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 28 Jul 2012 18:09:52 -0700 Subject: [SciPy-User] error installing scipy 0.10.1 with python 3.2 In-Reply-To: <231CC66C-D941-4B02-8B61-4BDD96845E38@gmail.com> References: <8B0ACCCC-4E49-4266-BEC1-79D430ED1CF1@gmail.com> <231CC66C-D941-4B02-8B61-4BDD96845E38@gmail.com> Message-ID: <50148D60.8090307@uci.edu> I submitted a pull request so this doesn't get lost: https://github.com/scipy/scipy/pull/279 Christoph On 7/28/2012 4:55 PM, Michael Crawford wrote: > Then it compiles and I can run install :) > Thanks a ton. 
> > > On Jul 28, 2012, at 7:46 PM, Oleksandr Huziy > wrote: > >> Hi, >> >> what happens if you change the line 268 ( in >> scipy/sparse/linalg/dsolve/_superlumodule.c) >> >> from >> return; >> >> to >> return NULL; >> >> Cheers >> >> -- >> Oleksandr Huziy >> >> 2012/7/28 Michael Crawford > >> >> I am trying to get scipy installed for python 3.2 on OSX 10.8 and >> am receiving the following compile error: >> >> building 'scipy.sparse.linalg.dsolve._superlu' extension >> compiling C sources >> C compiler: /usr/bin/clang -DNDEBUG -g -fwrapv -O3 -Wall >> -Wstrict-prototypes -Os -w -pipe -march=native -Qunused-arguments >> -mmacosx-version-min=10.8 >> >> compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 >> -I/Users/dalupus/.virtualenvs/test3/lib/python3.2/site-packages/numpy/core/include >> -I/usr/local/bin/../Cellar/python3/3.2.3/include/python3.2m -c' >> extra options: '-msse3' >> clang: scipy/sparse/linalg/dsolve/_superluobject.c >> clang: scipy/sparse/linalg/dsolve/_superlu_utils.c >> clang: scipy/sparse/linalg/dsolve/_superlumodule.c >> scipy/sparse/linalg/dsolve/_superlumodule.c:268:9: error: non-void >> function 'PyInit__superlu' should return a value [-Wreturn-type] >> return; >> >> I have no idea where to even start to fix this. Any ideas are >> appreciated. >> >> Thanks, >> Mike >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From zfyuan at mail.ustc.edu.cn Sun Jul 29 03:27:30 2012 From: zfyuan at mail.ustc.edu.cn (Jeffrey) Date: Sun, 29 Jul 2012 15:27:30 +0800 Subject: [SciPy-User] scipy.stats.kendalltau bug? 
In-Reply-To: References: <501411E4.6060308@mail.ustc.edu.cn> <501417D1.8050402@mail.ustc.edu.cn> Message-ID: <5014E5E2.1010002@mail.ustc.edu.cn> On 07/29/2012 01:06 AM, eat wrote: > Hi, > > On Sat, Jul 28, 2012 at 7:48 PM, Jeffrey > wrote: > > On 07/29/2012 12:23 AM, Jeffrey wrote: > > Dear all, > > > > The sentences bellow will always raise an Error or Exception > just > > as follows, which is a little anomaly. Is this a bug? > > > > >>> u1=numpy.random.rand(100000) > > >>> u2=numpy.random.rand(100000) > > >>> scipy.stats.kendalltau(u1,u2) > > > --------------------------------------------------------------------------- > > > > AttributeError Traceback (most recent > call > > last) > > > /home/zfyuan/phd/paper1/pyvine_lap/ in > > () > > ----> 1 sp.stats.kendalltau(u1,u2) > > > > /usr/lib64/python2.7/site-packages/scipy/stats/stats.pyc in > > kendalltau(x, y, initial_lexsort) > > 2673 > > 2674 tau = ((tot - (v + u - t)) - 2.0 * exchanges) / \ > > -> 2675 np.sqrt((tot - u) * (tot - v)) > > 2676 > > 2677 # what follows reproduces the ending of Gary Strangman's > > original > > > > > > AttributeError: sqrt > > > > Sorry, I didn't describe this bug with details. What I mean is > that when > the two array have larger length, for example with length 100000, then > it is more possible that the Error would occur. > > My scipy version is 0.9.0 and numpy is 1.6.2. > > Thanks a lot for your answering. 
> > I can confirm this, like > In []: os.sys.version > Out[]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit > (Intel)]' > In []: np.version.version > Out[]: '1.6.0' > In []: sp.version.version > Out[]: '0.9.0' > > In []: stats.kendalltau(rand(77929), rand(77929)) > Out[]: (0.0060807135427758865, 0.010891543687108114) > In []: stats.kendalltau(rand(77939), rand(77939)) > ------------------------------------------------------------ > Traceback (most recent call last): > File "", line 1, in > File "C:\Python27\lib\site-packages\scipy\stats\stats.py", line > 2675, in kendalltau > np.sqrt((tot - u) * (tot - v)) > AttributeError: sqrt > > There really seems to be odd problem above a certain length of arrays. > > > My 2 cents, > -eat > > > -- > > Jeffrey > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Thanks eat. I found the reason is that numpy.sqrt cannot deal with too large number. When calculating kendalltau, assume n=len(x),then the total pair number is 'tot' below: tot=(n-1)*n//2 when calculating tau, the de-numerator is as below: np.sqrt((tot-u)*(tot-v)) u and v stands for ties in x[] and y[perm[]], which is zero if the two array sample from continuous dist. Hence (tot-u)*(tot-v) may be out of range for the C written ufunc 'np.sqrt', and an Error is then raised. What about using math.sqrt here, or multiply two np.sqrt in the de-numerator? Since big data sets are often seen these days. Thanks a lot ! -- Jeffrey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre.raybaut at gmail.com Sun Jul 29 03:42:50 2012 From: pierre.raybaut at gmail.com (Pierre Raybaut) Date: Sun, 29 Jul 2012 09:42:50 +0200 Subject: [SciPy-User] ANN: Spyder v2.1.11 Message-ID: Hi all, On the behalf of Spyder's development team (http://code.google.com/p/spyderlib/people/list), I'm pleased to announce that Spyder v2.1.11 has been released and is available for Windows XP/Vista/7, GNU/Linux and MacOS X: http://code.google.com/p/spyderlib/ This is a pure maintenance release -- a lot of bugs were fixed since v2.1.10: http://code.google.com/p/spyderlib/wiki/ChangeLog Spyder is a free, open-source (MIT license) interactive development environment for the Python language with advanced editing, interactive testing, debugging and introspection features. Originally designed to provide MATLAB-like features (integrated help, interactive console, variable explorer with GUI-based editors for dictionaries, NumPy arrays, ...), it is strongly oriented towards scientific computing and software development. Thanks to the `spyderlib` library, Spyder also provides powerful ready-to-use widgets: embedded Python console (example: http://packages.python.org/guiqwt/_images/sift3.png), NumPy array editor (example: http://packages.python.org/guiqwt/_images/sift2.png), dictionary editor, source code editor, etc. Description of key features with tasty screenshots can be found at: http://code.google.com/p/spyderlib/wiki/Features On Windows platforms, Spyder is also available as a stand-alone executable (don't forget to disable UAC on Vista/7). This all-in-one portable version is still experimental (for example, it does not embed sphinx -- meaning no rich text mode for the object inspector) but it should provide a working version of Spyder for Windows platforms without having to install anything else (except Python 2.x itself, of course). 
Don't forget to follow Spyder updates/news: * on the project website: http://code.google.com/p/spyderlib/ * and on our official blog: http://spyder-ide.blogspot.com/ Last, but not least, we welcome any contribution that helps making Spyder an efficient scientific development/computing environment. Join us to help creating your favourite environment! (http://code.google.com/p/spyderlib/wiki/NoteForContributors) Enjoy! -Pierre From njs at pobox.com Sun Jul 29 03:47:02 2012 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 29 Jul 2012 08:47:02 +0100 Subject: [SciPy-User] scipy.stats.kendalltau bug? In-Reply-To: <5014E5E2.1010002@mail.ustc.edu.cn> References: <501411E4.6060308@mail.ustc.edu.cn> <501417D1.8050402@mail.ustc.edu.cn> <5014E5E2.1010002@mail.ustc.edu.cn> Message-ID: On Sun, Jul 29, 2012 at 8:27 AM, Jeffrey wrote: > Thanks eat. I found the reason is that numpy.sqrt cannot deal with too large > number. When calculating kendalltau, assume n=len(x),then the total pair > number is 'tot' below: > > tot=(n-1)*n//2 > > when calculating tau, the de-numerator is as below: > > np.sqrt((tot-u)*(tot-v)) > > u and v stands for ties in x[] and y[perm[]], which is zero if the two array > sample from continuous dist. Hence (tot-u)*(tot-v) may be out of range for > the C written ufunc 'np.sqrt', and an Error is then raised. > > What about using math.sqrt here, or multiply two np.sqrt in the > de-numerator? Since big data sets are often seen these days. It seems like the bug is that np.sqrt is raising an AttributeError on valid input... can you give an example of a value that np.sqrt fails on? Like >>> np.sqrt() AttributeError -n From zfyuan at mail.ustc.edu.cn Sun Jul 29 05:42:52 2012 From: zfyuan at mail.ustc.edu.cn (Jeffrey) Date: Sun, 29 Jul 2012 17:42:52 +0800 Subject: [SciPy-User] scipy.stats.kendalltau bug? 
In-Reply-To: References: <501411E4.6060308@mail.ustc.edu.cn> <501417D1.8050402@mail.ustc.edu.cn> <5014E5E2.1010002@mail.ustc.edu.cn> Message-ID: <5015059C.60406@mail.ustc.edu.cn> On 07/29/2012 03:47 PM, Nathaniel Smith wrote: > On Sun, Jul 29, 2012 at 8:27 AM, Jeffrey wrote: >> Thanks eat. I found the reason is that numpy.sqrt cannot deal with too large >> number. When calculating kendalltau, assume n=len(x),then the total pair >> number is 'tot' below: >> >> tot=(n-1)*n//2 >> >> when calculating tau, the denominator is as below: >> >> np.sqrt((tot-u)*(tot-v)) >> >> u and v stands for ties in x[] and y[perm[]], which is zero if the two array >> sample from continuous dist. Hence (tot-u)*(tot-v) may be out of range for >> the C written ufunc 'np.sqrt', and an Error is then raised. >> >> What about using math.sqrt here, or multiply two np.sqrt in the >> denominator? Since big data sets are often seen these days. > It seems like the bug is that np.sqrt is raising an AttributeError on > valid input... can you give an example of a value that np.sqrt fails > on? Like Assume the input array x and y has n=100000 length, which is common seen, and assume there is no tie in both x and y, hence u=0, v=0 and t=0 in the scipy.stats.kendalltau subroutine. Hence the denominator of expression for calculating tau would be as follows: np.sqrt( (tot-u) * (tot-v) ) Here above, tot= n * (n-1) //2=4999950000, and (tot-u) * (tot-v)= tot*tot = 24999500002500000000L, this long int will raise Error when np.sqrt is applied. I think type convert, like 'float()' should be done before np.sqrt, or write like np.sqrt(tot-u) * np.sqrt(tot-v) to avoid long integer. Thanks a lot : ) >>>> np.sqrt() > AttributeError > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Jeffrey
From njs at pobox.com Sun Jul 29 06:30:51 2012 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 29 Jul 2012 11:30:51 +0100 Subject: [SciPy-User] scipy.stats.kendalltau bug? In-Reply-To: <5015059C.60406@mail.ustc.edu.cn> References: <501411E4.6060308@mail.ustc.edu.cn> <501417D1.8050402@mail.ustc.edu.cn> <5014E5E2.1010002@mail.ustc.edu.cn> <5015059C.60406@mail.ustc.edu.cn> Message-ID: On Sun, Jul 29, 2012 at 10:42 AM, Jeffrey wrote: > On 07/29/2012 03:47 PM, Nathaniel Smith wrote: >> On Sun, Jul 29, 2012 at 8:27 AM, Jeffrey wrote: >>> Thanks eat. I found the reason is that numpy.sqrt cannot deal with too large >>> number. When calculating kendalltau, assume n=len(x),then the total pair >>> number is 'tot' below: >>> >>> tot=(n-1)*n//2 >>> >>> when calculating tau, the denominator is as below: >>> >>> np.sqrt((tot-u)*(tot-v)) >>> >>> u and v stands for ties in x[] and y[perm[]], which is zero if the two array >>> sample from continuous dist. Hence (tot-u)*(tot-v) may be out of range for >>> the C written ufunc 'np.sqrt', and an Error is then raised. >>> >>> What about using math.sqrt here, or multiply two np.sqrt in the >>> denominator? Since big data sets are often seen these days. >> It seems like the bug is that np.sqrt is raising an AttributeError on >> valid input... can you give an example of a value that np.sqrt fails >> on? Like > > Assume the input array x and y has n=100000 length, which is common > seen, and assume there is no tie in both x and y, hence u=0, v=0 and t=0 > in the scipy.stats.kendalltau subroutine. Hence the denominator of > expression for calculating tau would be as follows: > > np.sqrt( (tot-u) * (tot-v) ) > > Here above, tot= n * (n-1) //2=4999950000, and (tot-u) * (tot-v)= tot*tot > = 24999500002500000000L, this long int will raise Error when np.sqrt is > applied. I think type convert, like 'float()' should be done before > np.sqrt, or write like np.sqrt(tot-u) * np.sqrt(tot-v) to avoid long > integer.
> > Thanks a lot : ) Thanks, that clarifies things: https://github.com/numpy/numpy/issues/368 For now, yeah, some sort of workaround makes sense, though... in addition to the ones you mention, I noticed that this also seems to work: np.sqrt(bignum, dtype=float) You should submit a pull request :-). -n From zfyuan at mail.ustc.edu.cn Sun Jul 29 07:25:51 2012 From: zfyuan at mail.ustc.edu.cn (Jeffrey) Date: Sun, 29 Jul 2012 19:25:51 +0800 Subject: [SciPy-User] scipy.stats.kendalltau bug? In-Reply-To: References: <501411E4.6060308@mail.ustc.edu.cn> <501417D1.8050402@mail.ustc.edu.cn> <5014E5E2.1010002@mail.ustc.edu.cn> <5015059C.60406@mail.ustc.edu.cn> Message-ID: <50151DBF.7080205@mail.ustc.edu.cn> On 07/29/2012 06:30 PM, Nathaniel Smith wrote: > On Sun, Jul 29, 2012 at 10:42 AM, Jeffrey wrote: >> On 07/29/2012 03:47 PM, Nathaniel Smith wrote: >>> On Sun, Jul 29, 2012 at 8:27 AM, Jeffrey wrote: >>>> Thanks eat. I found the reason is that numpy.sqrt cannot deal with too large >>>> number. When calculating kendalltau, assume n=len(x),then the total pair >>>> number is 'tot' below: >>>> >>>> tot=(n-1)*n//2 >>>> >>>> when calculating tau, the denominator is as below: >>>> >>>> np.sqrt((tot-u)*(tot-v)) >>>> >>>> u and v stands for ties in x[] and y[perm[]], which is zero if the two array >>>> sample from continuous dist. Hence (tot-u)*(tot-v) may be out of range for >>>> the C written ufunc 'np.sqrt', and an Error is then raised. >>>> >>>> What about using math.sqrt here, or multiply two np.sqrt in the >>>> denominator? Since big data sets are often seen these days. >>> It seems like the bug is that np.sqrt is raising an AttributeError on >>> valid input... can you give an example of a value that np.sqrt fails >>> on? Like >> Assume the input array x and y has n=100000 length, which is common >> seen, and assume there is no tie in both x and y, hence u=0, v=0 and t=0 >> in the scipy.stats.kendalltau subroutine.
Hence the denominator of >> expression for calculating tau would be as follows: >> >> np.sqrt( (tot-u) * (tot-v) ) >> >> Here above, tot= n * (n-1) //2=4999950000, and (tot-u) * (tot-v)= tot*tot >> = 24999500002500000000L, this long int will raise Error when np.sqrt is >> applied. I think type convert, like 'float()' should be done before >> np.sqrt, or write like np.sqrt(tot-u) * np.sqrt(tot-v) to avoid long >> integer. >> >> Thanks a lot : ) > Thanks, that clarifies things: https://github.com/numpy/numpy/issues/368 > > For now, yeah, some sort of workaround makes sense, though... in > addition to the ones you mention, I noticed that this also seems to > work: > > np.sqrt(bignum, dtype=float) > > You should submit a pull request :-). > > -n > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > :-) thanks for your pull request. My English is a little poor, and I'm new to Python. -- Jeffrey From helmrp at yahoo.com Sun Jul 29 09:30:36 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sun, 29 Jul 2012 06:30:36 -0700 (PDT) Subject: [SciPy-User] SciPy's "Brentq" routine Message-ID: <1343568636.3797.YahooMailNeo@web31805.mail.mud.yahoo.com> Well, apparently this should be called the "Harris" algorithm. It may have been inspired by the Brent algorithm, but appears to differ from it in significant ways. Did you ever publish this? Have a reference to it? If not, could you supply a description suitable for inclusion in the SciPy documentation? Thanks Bob -------------- next part -------------- An HTML attachment was scrubbed...
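[Editor's note: the kendalltau overflow discussed in the thread above can be reproduced without scipy. This is a sketch of the arithmetic only; the names `tot`, `u`, `v` mirror the quoted discussion, not the actual scipy source.]

```python
import numpy as np

n = 100000
tot = n * (n - 1) // 2  # total number of pairs: 4999950000
u = v = 0               # no ties in either array

# The product is about 2.5e19 and exceeds the int64 maximum, so NumPy
# cannot hold it in a native integer type (hence the odd AttributeError
# from np.sqrt on old versions):
prod = (tot - u) * (tot - v)

# Converting to float first, or splitting the square root as suggested
# in the thread, avoids the problem:
denom = np.sqrt(float(tot - u)) * np.sqrt(float(tot - v))
```

With no ties the product is just `tot` squared, so `denom` comes back as `tot` up to floating-point rounding.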
URL: From helmrp at yahoo.com Sun Jul 29 17:22:49 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Sun, 29 Jul 2012 14:22:49 -0700 (PDT) Subject: [SciPy-User] Optimize fmin_cg issue Message-ID: <1343596969.20649.YahooMailNeo@web31806.mail.mud.yahoo.com> There appears to be something NQR (not quite right) with the SciPy optimize fmin_cg routine's handling of certain input values. Consider the following. Yes, I know the inputs are odd and far from the true optimal values, but I was experimenting to see what would happen with steep-sloped objective function. In the following, I use a scale factor to multiply the value of the objective function. As you will see, the output flags an error in "double_scalars". I dunno what they are, but assume it's a C-code item. Also, Enter guessed initial value of x, x0 = 10 Enter guessed initial value of y, y0 = 15 Use scale value, scale = 10 With f(z) = 10 * optimize.rosen(z) and fprime(z) = 10 * optimize.rosen_der(z), use SciPy's 'fmin_cg' Polak-Ribiere method to seek the minimum value of f: Use start value, z0 = [10 15]: At start, f(z0) = 7225810.0 At start, fprime(z0) = [3400180 -170000]) At start, Inf-norm of fprime(z0) = 3400180 Use the call: res = sp.optimize.fmin_cg(f, z0, fprime, gtol=1e-05, norm=sp.Inf) Warning (from warnings module): File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line 432 B = (fb-D-C*db)/(db*db) RuntimeWarning: overflow encountered in double_scalars Warning (from warnings module): File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line 412 A /= denom RuntimeWarning: divide by zero encountered in double_scalars Warning (from warnings module): File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line 413 B /= denom RuntimeWarning: divide by zero encountered in double_scalars Warning (from warnings module): File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line 414 radical = B*B-3*A*C RuntimeWarning: invalid value encountered in double_scalars Optimization terminated successfully. Current function value: nan Iterations: 7 Function evaluations: 109 Gradient evaluations: 42 fmin_cg returned res = [ nan nan] At end, zopt = [ nan nan] At end, f(zopt) = nan At end, fprime(zopt) = [ nan nan] At end, Inf-norm of fprime(zopt) = nan -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Sun Jul 29 20:04:35 2012 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Sun, 29 Jul 2012 20:04:35 -0400 Subject: [SciPy-User] Optimize fmin_cg issue In-Reply-To: <1343596969.20649.YahooMailNeo@web31806.mail.mud.yahoo.com> References: <1343596969.20649.YahooMailNeo@web31806.mail.mud.yahoo.com> Message-ID: Hi, I have no idea what is the functio you are minimizing but on my machine I get the following: In [11]: res = optimize.fmin_cg(f, z0, fprime,gtol =1e-5, norm=sp.Inf) Optimization terminated successfully. Current function value: 0.000000 Iterations: 51 Function evaluations: 169 Gradient evaluations: 136 In [12]: res Out[12]: array([ 1.00000022, 1.00000044]) Cheers -- Oleksandr Huziy 2012/7/29 The Helmbolds > There appears to be something NQR (not quite right) with the SciPy > optimize fmin_cg routine's handling of certain input values. Consider the > following. Yes, I know the inputs are odd and far from the true optimal > values, but I was experimenting to see what would happen with steep-sloped > objective function. In the following, I use a scale factor to multiply the > value of the objective function. > > As you will see, the output flags an error in "double_scalars". I dunno > what they are, but assume it's a C-code item.
Also, brackets like this are my annotations.> > > Enter guessed initial value of x, x0 = 10 > Enter guessed initial value of y, y0 = 15 > Use scale value, scale = 10 > > With f(z) = 10 * optimize.rosen(z) and fprime(z) = 10 * > optimize.rosen_der(z), > use SciPy's 'fmin_cg' Polak-Ribiere method to seek the minimum value > of f: > Use start value, z0 = [10 15]: > At start, f(z0) = 7225810.0 > At start, fprime(z0) = [3400180 -170000]) > At start, Inf-norm of fprime(z0) = 3400180 > Use the call: > res = sp.optimize.fmin_cg(f, z0, fprime, gtol=1e-05, norm=sp.Inf) > > Warning (from warnings module): > File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line > 432 > B = (fb-D-C*db)/(db*db) > RuntimeWarning: overflow encountered in double_scalars > Warning (from warnings module): > File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line > 412 > A /= denom > RuntimeWarning: divide by zero encountered in double_scalars > Warning (from warnings module): > File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line > 413 > B /= denom > RuntimeWarning: divide by zero encountered in double_scalars > Warning (from warnings module): > File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line > 414 > radical = B*B-3*A*C > RuntimeWarning: invalid value encountered in double_scalars > Optimization terminated > successfully. > > Current function value: nan > Iterations: 7 > Function evaluations: 109 > Gradient evaluations: 42 > fmin_cg returned res = [ nan nan] > At end, zopt = [ nan nan] > At end, f(zopt) = nan > At end, fprime(zopt) = [ nan nan] > At end, Inf-norm of fprime(zopt) = nan > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
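[Editor's note on the NaN run above: every ordered comparison against NaN is false, so a convergence loop of the form `while gnorm > gtol: ...` stops as soon as the gradient norm goes NaN, and code that afterwards checks only the iteration count can report success. A minimal illustration in plain Python/NumPy, not the actual scipy loop:]

```python
import numpy as np

gtol = 1e-5
gnorm = np.nan  # e.g. a gradient norm after a float64 overflow

# Every ordered comparison against NaN is False, so this guard fails
# even though the gradient norm is certainly not small:
still_running = bool(gnorm > gtol)

# A loop guarded by that condition exits immediately, which is
# indistinguishable from convergence if only the iteration count is checked.
iterations = 0
while gnorm > gtol:
    iterations += 1
```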
URL: From aronne.merrelli at gmail.com Sun Jul 29 21:01:25 2012 From: aronne.merrelli at gmail.com (Aronne Merrelli) Date: Sun, 29 Jul 2012 20:01:25 -0500 Subject: [SciPy-User] Optimize fmin_cg issue In-Reply-To: References: <1343596969.20649.YahooMailNeo@web31806.mail.mud.yahoo.com> Message-ID: On Sun, Jul 29, 2012 at 7:04 PM, Oleksandr Huziy wrote: > Hi, > > I have no idea what is the functio you are minimizing but on my machine I > get the following: > > In [11]: res = optimize.fmin_cg(f, z0, fprime,gtol =1e-5, norm=sp.Inf) > Optimization terminated successfully. > Current function value: 0.000000 > Iterations: 51 > Function evaluations: 169 > Gradient evaluations: 136 > > In [12]: res > Out[12]: array([ 1.00000022, 1.00000044]) > I get the same result, but I get a very similar NaN result as the OP if I try with a point farther from zero (try [100,150] instead of [10,15]). That suggests this is a 32bit vs 64bit float overflow issue - perhaps OP is using 32 bit and we are using 64 bit floats. So, I doubt the algorithm has a bug per se, but the output message could be improved. I'm not an expert at this part of SciPy, but my guess is that the iteration loop just isn't checking for NaN values to determine the "success" of the iterative loop. Looking inside fmin_cg in optimize.py (moved to _minimize_cg() in the current github), I think what might be happening is that the gnorm variable is NaN, and then this causes the while loop to terminate since the while loop's conditional is (gnorm > gtol), and that will be false. So the later code just sees an exit of the while loop well short of the iteration maximum, and it only checks the iteration limit and step size (alpha_k) to determine "success". [1] https://github.com/scipy/scipy/blob/master/scipy/optimize/optimize.py Cheers, Aronne > > > 2012/7/29 The Helmbolds >> >> There appears to be something NQR (not quite right) with the SciPy >> optimize fmin_cg routine's handling of certain input values. 
Consider the >> following. Yes, I know the inputs are odd and far from the true optimal >> values, but I was experimenting to see what would happen with steep-sloped >> objective function. In the following, I use a scale factor to multiply the >> value of the objective function. >> >> As you will see, the output flags an error in "double_scalars". I dunno >> what they are, but assume it's a C-code item. Also, > like this are my annotations.> >> >> Enter guessed initial value of x, x0 = 10 >> Enter guessed initial value of y, y0 = 15 >> Use scale value, scale = 10 >> >> With f(z) = 10 * optimize.rosen(z) and fprime(z) = 10 * >> optimize.rosen_der(z), >> use SciPy's 'fmin_cg' Polak-Ribiere method to seek the minimum value >> of f: >> Use start value, z0 = [10 15]: >> At start, f(z0) = 7225810.0 >> At start, fprime(z0) = [3400180 -170000]) >> At start, Inf-norm of fprime(z0) = 3400180 >> Use the call: >> res = sp.optimize.fmin_cg(f, z0, fprime, gtol=1e-05, norm=sp.Inf) >> >> Warning (from warnings module): >> File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line >> 432 >> B = (fb-D-C*db)/(db*db) >> RuntimeWarning: overflow encountered in double_scalars >> Warning (from warnings module): >> File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line >> 412 >> A /= denom >> RuntimeWarning: divide by zero encountered in double_scalars >> Warning (from warnings module): >> File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line >> 413 >> B /= denom >> RuntimeWarning: divide by zero encountered in double_scalars >> Warning (from warnings module): >> File "C:\Python27\lib\site-packages\scipy\optimize\linesearch.py", line >> 414 >> radical = B*B-3*A*C >> RuntimeWarning: invalid value encountered in double_scalars >> Optimization terminated successfully. 
>> >> Current function value: nan >> Iterations: 7 >> Function evaluations: 109 >> Gradient evaluations: 42 >> fmin_cg returned res = [ nan nan] >> At end, zopt = [ nan nan] >> At end, f(zopt) = nan >> At end, fprime(zopt) = [ nan nan] >> At end, Inf-norm of fprime(zopt) = nan >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ibarak2000 at yahoo.com Mon Jul 30 11:09:04 2012 From: ibarak2000 at yahoo.com (ilan barak) Date: Mon, 30 Jul 2012 18:09:04 +0300 Subject: [SciPy-User] optimize.fmin_l_bfgs_b wrong number of arguments In-Reply-To: References: <1342974167.30673.YahooMailNeo@web125404.mail.ne1.yahoo.com> Message-ID: <5016A390.1000109@yahoo.com> Thanks Josef, you are correct ilan On 7/22/2012 7:30 PM, josef.pktd at gmail.com wrote: > On Sun, Jul 22, 2012 at 12:22 PM, ilan barak wrote: >> Hello, >> I apologize for the long description, this is the best I can do... 
>> >> I have two functions defined as: >> >> def hypothesis(params): >> # build a series of length params[0], with starting gap >> params[1] , repetition params[2] with scale params[3] >> # using tt as the basic shape >> result=np.zeros(params[0]) >> for index in >> np.arange(params[1],params[0]-tt.shape[0],params[2]): >> result[int(round(index)):int(round(index))+tt.shape[0]]=tt >> return params[3]*result >> >> def Cost(params,sig): # starting gap params[0] , repetition >> params[1] with scale params[2] >> # sig is signals to compare to >> # calculate error cost >> result=np.linalg.norm(sig-hypothesis([sig.shape[0]]+params)) >> return result >> >> The Cost function requires a 3 parameter list and a signal that is an >> 800 long ndarray >> Running the Cost with: >> params=([start,gap,0.04]) >> Cost(params,mysig5) , where mysig5 is a length 800 ndarray works >> fine. >> However: >> p0 = np.array([10.,20.,0.01]) # Initial guess for the parameters >> start,gap,scale >> mybounds = [(0,20), (10,25), (0.001,0.1)] >> x, f, d= optimize.fmin_l_bfgs_b(Cost, p0[:],fprime=None, >> args=mysig5, bounds=mybounds, approx_grad=True) > > args=(mysig5,) > > args should be a tuple > > I think > > Josef >> >> complains: >> >> Traceback >> >> C:\Users\ilan\Documents\python\opt_detection_danny1.py 130 >> fmin_l_bfgs_b >> C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 199 >> func_and_grad >> C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py 145 >> TypeError: Cost() takes exactly 2 arguments (801 given) >> >> Where am I wrong >> >> thanks >> >> Ilan >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user From Wolfgang.Draxinger at physik.uni-muenchen.de Mon Jul 30 12:33:01 2012 From: Wolfgang.Draxinger at physik.uni-muenchen.de (Wolfgang Draxinger) Date: Mon, 30 Jul 2012 18:33:01 +0200 Subject: [SciPy-User] Updating SciPy from 0.9.0 to 0.10.1 triggers undesired behavior in
NumPy 1.6.2 Message-ID: <20120730183301.4d5f172b@gar-ws-bl08.garching.physik.uni-muenchen.de> Hi, first my apologies for crossposting this to two maillists, but as both projects are affected I think this is in order. Like the subject says I encountered some undesired behavior in the interaction of SciPy with NumPy. Using the "old" SciPy version 0.9.0 everything works fine and smooth. But upgrading to SciPy-0.10.1 triggers some ValueError in numpy.histogram2d of NumPy-1.6.2 when executing one of my programs. I developed a numerical evaluation system for our detectors here. One of the key operations is determining the distribution of some 2-dimensional variable space based on the values found in the image delivered by the detector, where each pixel has associated values for the target variables. This goes something like the following ABdist, binsA, binsB = numpy.histogram2d( B_yz.ravel(), A_yz.ravel(), [B_bins, A_bins], weights=image.ravel() ) The bins parameter can be either [int, int] or [array, array], that makes no difference in the outcome. The mappings A_yz and B_yz are created using scipy.interpolate.griddata. We have a list of pairs of pairs which are determined by measurement. Basically in the calibration step we vary variables A,B and store at which Y,Z we get the corresponding signal. So essentially this is a (A,B) -> (Y,Z) mapping. In the region of interest is has a bijective subset that's also smooth. However the original data also contains areas where (Y,Z) has no corresponding (A,B) or where multiple (A,B) map to the same (Y,Z); like said, those lie outside the RoI. For our measurements we need to reverse this process, i.e. we want to do (Y,Z) -> (A,B). 
So I use griddata to evaluate a discrete reversal for this mapping, of the same dimensions that the to be evaluated image has: gry, grz = numpy.mgrid[self.y_lim[0]:self.y_lim[1]:self.y_res*1j, self.z_lim[0]:self.z_lim[1]:self.z_res*1j] # for whatever reason I have to do the following # assigning to evalgrid directly breaks the program. evalgrid = (gry, grz) points = (Y.ravel(), Z.ravel()) def gd(a): return scinp.griddata( points, a.ravel(), evalgrid, method='cubic' ) A_yz = gd(A) B_yz = gd(B) where A,B,Y,Z have the same dimensions and are the ordered lists/arrays of the scalar values of the two sets mapped between. As you can see, this approach does also involve the elements of the sets, which are not mapped bijectively. As lying outside the convex boundary or not being properly interpolatable they should receive the fill value. As long as I stay with SciPy-0.9.0 everything works fine. However after upgrading to SciPy-0.10.1 the histogram2d step fails with a ValueError. The version of NumPy is 1.6.2 for both cases. /usr/lib64/python2.7/site-packages/numpy/ma/core.py:772: RuntimeWarning: invalid value encountered in absolute return umath.absolute(a) * self.tolerance >= umath.absolute(b) Traceback (most recent call last): File "./ephi.py", line 71, in ABdist, binsA, binsB = numpy.histogram2d(B_yz.ravel(), A_yz.ravel(), [B_bins, A_bins], weights=image.ravel()) File "/usr/lib64/python2.7/site-packages/numpy/lib/twodim_base.py", line 615, in histogram2d hist, edges = histogramdd([x,y], bins, range, normed, weights) File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", line 357, in histogramdd decimal = int(-log10(mindiff)) + 6 ValueError: cannot convert float NaN to integer Any ideas on this? 
Regards,

Wolfgang

--
Fakultät für Physik, LMU München
Beschleunigerlabor/MLL
Am Coulombwall 6
85748 Garching
Deutschland
Tel: +49-89-289-14286
Fax: +49-89-289-14280

From sturla at molden.no Mon Jul 30 15:15:14 2012 From: sturla at molden.no (Sturla Molden) Date: Mon, 30 Jul 2012 21:15:14 +0200 Subject: [SciPy-User] SciPy bugtracker on GitHub (a la NumPy)? Message-ID: <5016DD42.5090404@molden.no>

(This might have been discussed before, forgive me if I've missed it.)

NumPy is using "Issues" on GitHub as bugtracker in addition to trac, it seems. It has several advantages: everything is kept in one place on GitHub, the interface is tidier, and pull requests can be attached to "issues" (cf. tickets with attached .diff files on trac).

In my experience the patches contributed on trac often get "forgotten". Someone has to take time to get them into git/svn. (Which quickly discouraged me from contributing anything.) I think this is the same reason Ralf Gommers complained when I mailed cKDTree code here instead of using git myself: "I know you may not have the time or interest to learn about git right now, but it may make both our lives easier if you try the below steps. It will allow you to put your commits on top of mine without any manual copying." I guess that applies to anything attached to a ticket on trac as well?

When SciPy is on GitHub I think we should consider using GitHub's issue tracker, as NumPy already does.

Sturla

From bnuttall at uky.edu Mon Jul 30 15:25:33 2012 From: bnuttall at uky.edu (Nuttall, Brandon C) Date: Mon, 30 Jul 2012 19:25:33 +0000 Subject: [SciPy-User] Classification using neural networks Message-ID: <77F5D06112589B4BB08A7C881381AB7604D3827F@ex10mb06.ad.uky.edu>

Thanks to all who took the time to respond to my inquiry on classification. I now have a lot of background material and ideas to explore.
Brandon Nuttall, KRPG-1364
Kentucky Geological Survey
www.uky.edu/kgs
bnuttall at uky.edu (KGS, Mo-We)
Brandon.nuttall at ky.gov (EEC, Th-Fr)
859-323-0544
859-684-7473 (cell)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ralf.gommers at googlemail.com Mon Jul 30 15:52:01 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Mon, 30 Jul 2012 21:52:01 +0200 Subject: [SciPy-User] scipy.pkgload() error In-Reply-To: <5011A0F2.9090708@sun.ac.za> References: <5011A0F2.9090708@sun.ac.za> Message-ID:

On Thu, Jul 26, 2012 at 9:56 PM, Johann Rohwer wrote:
> After upgrading my scipy installation to the latest source from git,
> scipy.pkgload() no longer works and gives the following traceback:
>
> In [2]: scipy.pkgload()
> ---------------------------------------------------------------------------
> NameError Traceback (most recent call last)
> in ()
> ----> 1 scipy.pkgload()
>
> /usr/local/lib/python2.7/dist-packages/numpy/__init__.pyc in
> pkgload(*packages, **options)
> 132
> 133 def pkgload(*packages, **options):
> --> 134 loader = PackageLoader(infunc=True)
> 135 return loader(*packages, **options)
> 136
>
> /usr/local/lib/python2.7/dist-packages/numpy/_import_tools.pyc in
> __init__(self, verbose, infunc)
> 15 self.parent_frame = frame = sys._getframe(_level)
> 16 self.parent_name =
> eval('__name__',frame.f_globals,frame.f_locals)
> ---> 17 parent_path =
> eval('__path__',frame.f_globals,frame.f_locals)
> 18 if isinstance(parent_path, str):
> 19 parent_path = [parent_path]
>
> in ()
>
> NameError: name '__path__' is not defined
>
> In [3]: scipy.__version__
> Out[3]: '0.12.0.dev-8e918cd'
>
> ------------------------------------------------------------------------------
>
> I see the pkgload() method is actually called from within numpy (and not
> scipy, although it's called from scipy), but am not sure whether this is
> expected. Any ideas?

Must have been broken by e11bf1d24.
Now scipy.pkgload() is just an alias for numpy.pkgload(), which simply looks broken. My proposal would be to delete pkgload from the scipy namespace.

Ralf

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pav at iki.fi Mon Jul 30 16:29:40 2012 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 30 Jul 2012 22:29:40 +0200 Subject: [SciPy-User] scipy.pkgload() error In-Reply-To: References: <5011A0F2.9090708@sun.ac.za> Message-ID:

On 30.07.2012 21:52, Ralf Gommers wrote:
[clip]
> My proposal would be to delete pkgload from the scipy namespace.

Or maybe have it do __import__('scipy.' + pkgname)

I had some old code that assumed pkgload would add the submodule to the main namespace, so maybe trying to restore this could be one option. Alternatively, just remove it...

Pauli

From david_baddeley at yahoo.com.au Mon Jul 30 20:42:23 2012 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Mon, 30 Jul 2012 17:42:23 -0700 (PDT) Subject: [SciPy-User] qHull zero division error, scipy 0.10.1, linux Message-ID: <1343695343.11264.YahooMailNeo@web113409.mail.gq1.yahoo.com>

I get the following zero division error when trying to use the qHull bindings in scipy.spatial, e.g.

In [8]: from scipy.spatial import qhull
In [9]: qhull.Delaunay(randn(100, 2))
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
/home/david/ in ()
/usr/local/lib/python2.6/dist-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/spatial/qhull.so in scipy.spatial.qhull.Delaunay.__init__ (scipy/spatial/qhull.c:4109)()
/usr/local/lib/python2.6/dist-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy/spatial/qhull.so in scipy.spatial.qhull._construct_delaunay (scipy/spatial/qhull.c:1344)()
ZeroDivisionError: float division

All the tests involving the qhull module also fail with the same error if I run scipy.test().
Full system details below:

NumPy version 1.6.2
NumPy is installed in /usr/local/lib/python2.6/dist-packages/numpy-1.6.2-py2.6-linux-x86_64.egg/numpy
SciPy version 0.10.1
SciPy is installed in /usr/local/lib/python2.6/dist-packages/scipy-0.10.1-py2.6-linux-x86_64.egg/scipy
Python version 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3]
nose version 1.1.2
Ubuntu 10.04 LTS x64

Both numpy and scipy were installed using easy_install. Before I download the source and start messing with qhull.pyx / the qhull sources themselves, I thought I'd ask if there were any known bugs / clues.

many thanks,
David

From jr at sun.ac.za Tue Jul 31 02:18:10 2012 From: jr at sun.ac.za (Johann Rohwer) Date: Tue, 31 Jul 2012 08:18:10 +0200 Subject: [SciPy-User] scipy.pkgload() error In-Reply-To: References: <5011A0F2.9090708@sun.ac.za> Message-ID: <1627248.DDPmKElzJ1@kruimel>

On Monday 30 July 2012 22:29:40 Pauli Virtanen wrote:
> 30.07.2012 21:52, Ralf Gommers kirjoitti:
> [clip]
>
> > My proposal would be to delete pkgload from the scipy namespace.
>
> Or maybe have it do __import__('scipy.' + pkgname)
> I had some old code that assumed pkgload would add the submodule to
> the main namespace, so maybe trying to restore this could be one
> option. Alternatively, just remove it...

I agree that scipy.pkgload('io') and import scipy.io are probably not much different. However, I did find the convenience function scipy.pkgload() useful to load all the subpackages and it would be nice to have that option, although it's of course not a show stopper.

--Johann

E-mail disclaimer: This e-mail may contain confidential information and may be legally privileged and is intended only for the person to whom it is addressed. If you are not the intended recipient, you are notified that you may not use, distribute or copy this document in any manner whatsoever. Kindly also notify the sender immediately by telephone, and delete the e-mail. The University does not accept liability for any damage, loss or expense arising from this e-mail and/or accessing any files attached to this e-mail.

From helmrp at yahoo.com Tue Jul 31 07:53:10 2012 From: helmrp at yahoo.com (The Helmbolds) Date: Tue, 31 Jul 2012 04:53:10 -0700 (PDT) Subject: [SciPy-User] Issue with SciPy optimize fmin_cg In-Reply-To: References: Message-ID: <1343735590.90068.YahooMailNeo@web31812.mail.mud.yahoo.com>

I think you are right. I'm using a 64-bit machine with Windows operating system, but Python and SciPy are in my 32-bit programs folder.

>________________________________
>Message: 3
>Date: Sun, 29 Jul 2012 20:01:25 -0500
>From: Aronne Merrelli
>Subject: Re: [SciPy-User] Optimize fmin_cg issue
>To: SciPy Users List
>
>I get the same result, but I get a very similar NaN result as the OP
>if I try with a point farther from zero (try [100,150] instead of
>[10,15]). That suggests this is a 32bit vs 64bit float overflow issue
>- perhaps OP is using 32 bit and we are using 64 bit floats. So, I
>doubt the algorithm has a bug per se, but the output message could be
>improved.
>
>I'm not an expert at this part of SciPy, but my guess is that the
>iteration loop just isn't checking for NaN values to determine the
>"success" of the iterative loop.
>Looking inside fmin_cg in optimize.py
>(moved to _minimize_cg() in the current github), I think what might be
>happening is that the gnorm variable is NaN, and then this causes the
>while loop to terminate since the while loop's conditional is (gnorm >
>gtol), and that will be false. So the later code just sees an exit of
>the while loop well short of the iteration maximum, and it only checks
>the iteration limit and step size (alpha_k) to determine "success".
>
>[1] https://github.com/scipy/scipy/blob/master/scipy/optimize/optimize.py
>
>Cheers,
>Aronne
>

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From njs at pobox.com Tue Jul 31 11:37:26 2012 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 31 Jul 2012 16:37:26 +0100 Subject: [SciPy-User] B-spline basis functions? Message-ID:

Hi all,

I'd like to be able to do spline regression in patsy[1], which means that I need to be able to compute b-spline basis functions. I am not an initiate into the mysteries of practical spline computations, but I *think* the stuff in scipy.signal is not quite usable as is, because it's focused on doing interpolation directly rather than exposing the basis functions themselves?

Specifically, to achieve feature parity with R [2], I need to be able to take
- an arbitrary order
- an arbitrary collection of knot positions (which may be irregularly spaced)
- a vector x of points at which to evaluate the basis functions
and spit out the value of each spline basis function evaluated at each point in the x vector.

It looks like scipy.signal.bspline *might* be useful, but I can't quite tell? Or alternatively someone might have some code lying around to do this already?

Basically I have a copy of Schumaker here and I'm hoping someone will save me from having to read it :-).
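One common way to get exactly this is the unit-coefficient trick: the i-th basis function is itself the spline whose coefficient vector is the i-th standard basis vector, which scipy.interpolate.splev can evaluate directly. A sketch; the clamped cubic knot vector here is an arbitrary example, not data from the thread:

```python
import numpy as np
from scipy.interpolate import splev

def bspline_basis(x, knots, deg):
    """Evaluate every b-spline basis function of degree `deg` on the
    (already extended) knot sequence `knots` at the points `x`.

    Returns an array of shape (len(x), n), with n = len(knots) - deg - 1.
    """
    x = np.asarray(x, dtype=float)
    n = len(knots) - deg - 1
    out = np.empty((len(x), n))
    for i in range(n):
        # The i-th basis function is the spline with coefficients e_i.
        # FITPACK wants the coefficient array padded to len(knots).
        coefs = np.zeros(len(knots))
        coefs[i] = 1.0
        out[:, i] = splev(x, (knots, coefs, deg))
    return out

# Example: cubic basis on [0, 1] with repeated (clamped) end knots.
knots = np.array([0.0, 0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0])
x = np.linspace(0.0, 1.0, 11)
B = bspline_basis(x, knots, deg=3)
# On a clamped knot vector the basis is a partition of unity on [0, 1],
# so each row of B sums to 1.
```

This is essentially what the splvander snippet later in the thread does with np.eye, one row at a time.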
-n

[1] https://github.com/pydata/patsy/
[2] http://stat.ethz.ch/R-manual/R-devel/library/splines/html/bs.html
    http://stat.ethz.ch/R-manual/R-devel/library/splines/html/ns.html

From charlesr.harris at gmail.com Tue Jul 31 11:46:14 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 31 Jul 2012 09:46:14 -0600 Subject: [SciPy-User] B-spline basis functions? In-Reply-To: References: Message-ID:

On Tue, Jul 31, 2012 at 9:37 AM, Nathaniel Smith wrote:
> Hi all,
>
> I'd like to be able to do spline regression in patsy[1], which means
> that I need to be able to compute b-spline basis functions. I am not
> an initiate into the mysteries of practical spline computations, but I
> *think* the stuff in scipy.signal is not quite usable as is, because
> it's focused on doing interpolation directly rather than exposing the
> basis functions themselves?
>
> Specifically, to achieve feature parity with R [2], I need to be able to
> take
> - an arbitrary order
> - an arbitrary collection of knot positions (which may be irregularly
> spaced)
> - a vector x of points at which to evaluate the basis functions
> and spit out the value of each spline basis function evaluated at each
> point in the x vector.
>
> It looks like scipy.signal.bspline *might* be useful, but I can't
> quite tell? Or alternatively someone might have some code lying around
> to do this already?
>
> Basically I have a copy of Schumaker here and I'm hoping someone will
> save me from having to read it :-).
>

I have this floating around

def splvander(x, deg, knots):
    """Vandermonde type matrix for splines.

    Returns a matrix whose columns are the values of the b-splines of deg
    `deg` associated with the knot sequence `knots` evaluated at the points
    `x`.

    Parameters
    ----------
    x : array_like
        Points at which to evaluate the b-splines.
    deg : int
        Degree of the splines.
    knots : array_like
        List of knots. The convention here is that the interior knots have
        been extended at both ends by ``deg + 1`` extra knots.

    Returns
    -------
    vander : ndarray
        Vandermonde like matrix of shape (m,n), where ``m = len(x)`` and
        ``m = len(knots) - deg - 1``

    Notes
    -----
    The knots extending the interior points are usually taken to be the same
    as the endpoints of the interval on which the spline will be evaluated.

    """
    m = len(knots) - deg - 1
    v = np.zeros((m, len(x)))
    d = np.eye(m, len(knots))
    for i in range(m):
        v[i] = spl.splev(x, (knots, d[i], deg))
    return v.T

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From charlesr.harris at gmail.com Tue Jul 31 11:48:06 2012 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 31 Jul 2012 09:48:06 -0600 Subject: [SciPy-User] B-spline basis functions? In-Reply-To: References: Message-ID:

On Tue, Jul 31, 2012 at 9:46 AM, Charles R Harris wrote:
>
>
> On Tue, Jul 31, 2012 at 9:37 AM, Nathaniel Smith wrote:
>
>> Hi all,
>>
>> I'd like to be able to do spline regression in patsy[1], which means
>> that I need to be able to compute b-spline basis functions. I am not
>> an initiate into the mysteries of practical spline computations, but I
>> *think* the stuff in scipy.signal is not quite usable as is, because
>> it's focused on doing interpolation directly rather than exposing the
>> basis functions themselves?
>>
>> Specifically, to achieve feature parity with R [2], I need to be able to
>> take
>> - an arbitrary order
>> - an arbitrary collection of knot positions (which may be irregularly
>> spaced)
>> - a vector x of points at which to evaluate the basis functions
>> and spit out the value of each spline basis function evaluated at each
>> point in the x vector.
>>
>> It looks like scipy.signal.bspline *might* be useful, but I can't
>> quite tell? Or alternatively someone might have some code lying around
>> to do this already?
>> >> Basically I have a copy of Schumaker here and I'm hoping someone will >> save me from having to read it :-). >> >> > I have this floating around > > def splvander(x, deg, knots): > """Vandermonde type matrix for splines. > > Returns a matrix whose columns are the values of the b-splines of deg > `deg` associated with the knot sequence `knots` evaluated at the points > `x`. > > Parameters > ---------- > x : array_like > Points at which to evaluate the b-splines. > deg : int > Degree of the splines. > knots : array_like > List of knots. The convention here is that the interior knots have > been extended at both ends by ``deg + 1`` extra knots. > > Returns > ------- > vander : ndarray > Vandermonde like matrix of shape (m,n), where ``m = len(x)`` and > ``m = len(knots) - deg - 1`` > > Notes > ----- > The knots exending the interior points are usually taken to be the same > as the endpoints of the interval on which the spline will be evaluated. > > """ > m = len(knots) - deg - 1 > v = np.zeros((m, len(x))) > d = np.eye(m, len(knots)) > for i in range(m): > v[i] = spl.splev(x, (knots, d[i], deg)) > return v.T > > With this import from scipy.interpolate import fitpack as spl Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Jul 31 12:43:36 2012 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 31 Jul 2012 18:43:36 +0200 Subject: [SciPy-User] Updating SciPy from 0.9.0 to 0.10.1 triggers undesired behavior in NumPy 1.6.2 In-Reply-To: <20120730183301.4d5f172b@gar-ws-bl08.garching.physik.uni-muenchen.de> References: <20120730183301.4d5f172b@gar-ws-bl08.garching.physik.uni-muenchen.de> Message-ID: On Mon, Jul 30, 2012 at 6:33 PM, Wolfgang Draxinger < Wolfgang.Draxinger at physik.uni-muenchen.de> wrote: > Hi, > > first my apologies for crossposting this to two maillists, but as both > projects are affected I think this is in order. 
> > Like the subject says I encountered some undesired behavior in the > interaction of SciPy with NumPy. > > Using the "old" SciPy version 0.9.0 everything works fine and smooth. > But upgrading to SciPy-0.10.1 triggers some ValueError in > numpy.histogram2d of NumPy-1.6.2 when executing one of my programs. > > I developed a numerical evaluation system for our detectors here. One of > the key operations is determining the distribution of some 2-dimensional > variable space based on the values found in the image delivered by the > detector, where each pixel has associated values for the target > variables. This goes something like the following > > ABdist, binsA, binsB = numpy.histogram2d( > B_yz.ravel(), > A_yz.ravel(), > [B_bins, A_bins], > weights=image.ravel() ) > > The bins parameter can be either [int, int] or [array, array], that > makes no difference in the outcome. > > The mappings A_yz and B_yz are created using > scipy.interpolate.griddata. We have a list of pairs of pairs which are > determined by measurement. Basically in the calibration step we vary > variables A,B and store at which Y,Z we get the corresponding signal. > So essentially this is a (A,B) -> (Y,Z) mapping. In the region of > interest is has a bijective subset that's also smooth. However the > original data also contains areas where (Y,Z) has no corresponding > (A,B) or where multiple (A,B) map to the same (Y,Z); like said, those > lie outside the RoI. For our measurements we need to reverse this > process, i.e. we want to do (Y,Z) -> (A,B). > > So I use griddata to evaluate a discrete reversal for this > mapping, of the same dimensions that the to be evaluated image has: > > gry, grz = numpy.mgrid[self.y_lim[0]:self.y_lim[1]:self.y_res*1j, > self.z_lim[0]:self.z_lim[1]:self.z_res*1j] > # for whatever reason I have to do the following > # assigning to evalgrid directly breaks the program. 
> evalgrid = (gry, grz) > > points = (Y.ravel(), Z.ravel()) > > def gd(a): > return scinp.griddata( > points, > a.ravel(), > evalgrid, > method='cubic' ) > > A_yz = gd(A) > B_yz = gd(B) > > where A,B,Y,Z have the same dimensions and are the ordered lists/arrays > of the scalar values of the two sets mapped between. As you can see, > this approach does also involve the elements of the sets, which are not > mapped bijectively. As lying outside the convex boundary or not being > properly interpolatable they should receive the fill value. > > As long as I stay with SciPy-0.9.0 everything works fine. However after > upgrading to SciPy-0.10.1 the histogram2d step fails with a ValueError. > The version of NumPy is 1.6.2 for both cases. > > /usr/lib64/python2.7/site-packages/numpy/ma/core.py:772: RuntimeWarning: > invalid value encountered in absolute > return umath.absolute(a) * self.tolerance >= umath.absolute(b) > Traceback (most recent call last): > File "./ephi.py", line 71, in > ABdist, binsA, binsB = numpy.histogram2d(B_yz.ravel(), A_yz.ravel(), > [B_bins, A_bins], weights=image.ravel()) > File "/usr/lib64/python2.7/site-packages/numpy/lib/twodim_base.py", line > 615, in histogram2d > hist, edges = histogramdd([x,y], bins, range, normed, weights) > File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", > line 357, in histogramdd > decimal = int(-log10(mindiff)) + 6 > ValueError: cannot convert float NaN to integer > > Any ideas on this? > Looks to me like the issue only has to do with a change in griddata, and nothing with histogram2d. The return values from griddata must be different. I suggest that you compare the return values from griddata for 0.9.0 and 0.10.1 with your input data, then try to reduce that comparison to a small self-contained example that shows the difference. This will allow us to debug the issue further. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ndbecker2 at gmail.com Tue Jul 31 14:20:31 2012 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 31 Jul 2012 14:20:31 -0400 Subject: [SciPy-User] B-spline basis functions? References: Message-ID: ... > Returns > ------- > vander : ndarray > Vandermonde like matrix of shape (m,n), where ``m = len(x)`` and > ``m = len(knots) - deg - 1`` Typo? n = len(x)? From jsseabold at gmail.com Tue Jul 31 14:25:15 2012 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 31 Jul 2012 14:25:15 -0400 Subject: [SciPy-User] B-spline basis functions? In-Reply-To: References: Message-ID: On Tue, Jul 31, 2012 at 11:37 AM, Nathaniel Smith wrote: > Hi all, > > I'd like to be able to do spline regression in patsy[1], which means > that I need to be able to compute b-spline basis functions. I am not > an initiate into the mysteries of practical spline computations, but I > *think* the stuff in scipy.signal is not quite usable as is, because > it's focused on doing interpolation directly rather than exposing the > basis functions themselves? > > Specifically, to achieve feature parity with R [2], I need to be able to take > - an arbitrary order > - an arbitrary collection of knot positions (which may be irregularly spaced) > - a vector x of points at which to evaluate the basis functions > and spit out the value of each spline basis function evaluated at each > point in the x vector. > > It looks like scipy.signal.bspline *might* be useful, but I can't > quite tell? Or alternatively someone might have some code lying around > to do this already? > Josef will know more about this. I think he cleaned it up to work with scipy instead of the segfaulting C code we had. We've been carrying it around for a while, but I haven't had a chance to brush up yet. 
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/sandbox/bspline.py

Skipper

From robert.kern at gmail.com Tue Jul 31 15:18:03 2012 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 31 Jul 2012 20:18:03 +0100 Subject: [SciPy-User] Issue with SciPy optimize fmin_cg In-Reply-To: <1343735590.90068.YahooMailNeo@web31812.mail.mud.yahoo.com> References: <1343735590.90068.YahooMailNeo@web31812.mail.mud.yahoo.com> Message-ID:

On Tue, Jul 31, 2012 at 12:53 PM, The Helmbolds wrote:
> I think you are right. I'm using a 64-bit machine with Windows operating
> system, but Python and SciPy are in my 32-bit programs folder.

The 64-bitness of your machine or the 32-bitness of your Python has nothing to do with the size of your floats. Those only refer to the address space (the size of pointers). You will have to check your code. Do you explicitly use numpy.float32 arrays?

> Message: 3
> Date: Sun, 29 Jul 2012 20:01:25 -0500
> From: Aronne Merrelli
> Subject: Re: [SciPy-User] Optimize fmin_cg issue
> To: SciPy Users List
> I get the same result, but I get a very similar NaN result as the OP
> if I try with a point farther from zero (try [100,150] instead of
> [10,15]). That suggests this is a 32bit vs 64bit float overflow issue
> - perhaps OP is using 32 bit and we are using 64 bit floats. So, I
> doubt the algorithm has a bug per se, but the output message could be
> improved.
>
> I'm not an expert at this part of SciPy, but my guess is that the
> iteration loop just isn't checking for NaN values to determine the
> "success" of the iterative loop. Looking inside fmin_cg in optimize.py
> (moved to _minimize_cg() in the current github), I think what might be
> happening is that the gnorm variable is NaN, and then this causes the
> while loop to terminate since the while loop's conditional is (gnorm >
> gtol), and that will be false.
> So the later code just sees an exit of
> the while loop well short of the iteration maximum, and it only checks
> the iteration limit and step size (alpha_k) to determine "success".
>
> [1] https://github.com/scipy/scipy/blob/master/scipy/optimize/optimize.py
>
> Cheers,
> Aronne
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

--
Robert Kern

From btemperton at gmail.com Tue Jul 31 23:28:22 2012 From: btemperton at gmail.com (Ben Temperton) Date: Tue, 31 Jul 2012 20:28:22 -0700 Subject: [SciPy-User] calculating the mean for each factor (like tapply in R) Message-ID:

Hi there,

I've just moved from R to IPython and wondered if there was a good way of finding the mean and/or variance of values in a dataframe given a factor, e.g. if df =

x   experiment
10  1
13  1
12  1
3   2
4   2
6   2
33  3
44  3
55  3

in tapply you would do:

tapply(df$x, list(df$experiment), mean)
tapply(df$x, list(df$experiment), var)

I guess I can always loop through the array for each experiment type, but thought that this is the kind of functionality that would be included in a core library.

Many thanks,
Ben
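The usual Python-side equivalent of tapply is pandas groupby with an aggregation. A quick sketch using the example data above (assuming pandas is available; it is a separate package, not part of scipy itself):

```python
import pandas as pd

df = pd.DataFrame({
    "x": [10, 13, 12, 3, 4, 6, 33, 44, 55],
    "experiment": [1, 1, 1, 2, 2, 2, 3, 3, 3],
})

# tapply(df$x, list(df$experiment), mean)
means = df.groupby("experiment")["x"].mean()

# tapply(df$x, list(df$experiment), var)
# pandas' var() uses the sample variance (ddof=1), matching R's var().
variances = df.groupby("experiment")["x"].var()
```

Both results are Series indexed by the factor levels, so e.g. `means.loc[3]` gives the mean of experiment 3.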