From sergio_r at mail.com  Wed Nov  4 07:58:25 2015
From: sergio_r at mail.com (Sergio Rojas)
Date: Wed, 4 Nov 2015 13:58:25 +0100
Subject: [SciPy-User] Numerical precision: Scipy Vs Numpy
Message-ID:

"""
Having fun with numerical precision/errors: a solution of the quadratic
equation [a*x**2 + b*x + c == 0] with small values for the
constant a gives an unexpected result when comparing
SciPy and SymPy numerical results, which I hope an earthly kind soul could
explain. The output is as follows (the bothersome result
is the second root property):

SciPy
a*x1**2 + b*x1 + c =  16777217.0
a*x2**2 + b*x2 + c =  1.11022302463e-16
        Root properties:
(x1 + x2) + b/a =  0.0
 x1*x2 - c/a =  0.0
        ----
SymPy
a*x1**2 + b*x1 + c =  16777217.0000000
a*x2**2 + b*x2 + c =  0
        Root properties:
(x1 + x2) + b/a =  0
 x1*x2 - c/a =  0.125000000000000
"""

import scipy

def myprint(a, b, c, x1, x2):
    print('a*x1**2 + b*x1 + c = '),
    print(a*x1**2 + b*x1 + c)
    print('a*x2**2 + b*x2 + c = '),
    print(a*x2**2 + b*x2 + c)
    print("\t Root properties: ")
    print('(x1 + x2) + b/a = '),
    print((x1+x2) + float(b)/float(a))
    print(' x1*x2 - c/a = '),
    print(x1*x2 - float(c)/float(a))

a = 1.0e-15
b = 10000.0
c = 1.0

coeff = [a, b, c]
x1, x2 = scipy.roots(coeff)

print("\t SciPy ")
myprint(a, b, c, x1, x2)

from sympy import *
var('x')
x1, x2 = solve(a*x**2 + b*x + c, x)

print(" \t ---- \n \t SymPy ")
myprint(a, b, c, x1, x2)

# Thanks in advance,
# Sergio

From evgeny.burovskiy at gmail.com  Wed Nov  4 08:11:12 2015
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Wed, 4 Nov 2015 13:11:12 +0000
Subject: [SciPy-User] Numerical precision: Scipy Vs Numpy
In-Reply-To:
References:
Message-ID:

On Wed, Nov 4, 2015 at 12:58 PM, Sergio Rojas wrote:
> """
> Having fun with numerical precision/errors: a solution of the quadratic
> equation [a*x**2 + b*x + c == 0] with small values for the
> constant a gives an unexpected result when comparing
> SciPy and SymPy numerical results [...]
> """
> [...]

This is a catastrophic loss of precision.
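The puzzling 16777217.0 is just roundoff at the scale of the cancelled terms: for the large root x1 ~ -1e19, a*x1**2 and b*x1 are both about 1e23 in magnitude and cancel almost exactly, and the spacing between adjacent float64 values near 1e23 is 2**24 = 16777216, so a leftover of the order of 1e7 is about one unit in the last place. A quick check (a minimal sketch reusing the coefficients above):

import numpy as np
import scipy

a, b, c = 1.0e-15, 10000.0, 1.0
x1, x2 = scipy.roots([a, b, c])

print(a * x1**2)         # ~  1e+23
print(b * x1)            # ~ -1e+23: the two leading terms nearly cancel
print(np.spacing(1e23))  # 16777216.0 == 2**24, the float64 spacing at that magnitude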
For a way out, see, for example, http://math.stackexchange.com/questions/866331/how-to-implement-a-numerically-stable-solution-of-a-quadratic-equation Demo: In [1]: a, b, c = 1e-15, 1e4, 1.0 In [2]: D = b**2 - 4.*a*c In [3]: import numpy as np In [4]: 2.*c / (-b - np.sqrt(D)) Out[4]: -0.0001 In [5]: import sympy as sy In [6]: x = sy.var('x') In [7]: sy.solve(a*x**2 + b*x + c) Out[7]: [-1.00000000000000e+19, -0.000100000000000000] From davidmenhur at gmail.com Wed Nov 4 08:42:53 2015 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 4 Nov 2015 14:42:53 +0100 Subject: [SciPy-User] Numerical precision: Scipy Vs Numpy In-Reply-To: References: Message-ID: Minor comment: scipy.roots is actually numpy.roots. Not that it really matters, unless you want to look at the source. On 4 November 2015 at 13:58, Sergio Rojas wrote: > print(a*x1**2 + b*x1 + c) The right way of evaluating polynomials, specially when you have so wild values, is: print((a * x1 + b) * x1 + c) print((a * x2 + b) * x2 + c) Then, the values are 1 and 1.1 e-16 for SciPy and 1 and 0 for Sympy, so one root is good and the other is close. If you look at the values of the roots you will find that they are the same, -1e+19 -0.0001. One of the solutions gives you 1 because (a * x1 + b) * x1 when x is as large as -1e+19 is a numerical 0, and you don't have the accuracy to get it to -1: x1b = np.nextafter(x1, 2*x1) # Get the next floating point number towards -inf print((a * x1b + b) * x1b + c) 36379789.070917137 x1b = np.nextafter(x1, 0) # Let's see from the other side: print((a * x1b + b) * x1b + c) -18189893.035458561 So, essentially, you have the closest floating point number to the real root. If you need to work with these values, you should first normalise the polynomial such as a=1. If you need extra robustness, the classical advice is to fall back to the explicit formula, solving only the root that has the same sign as -b, and obtaining the other one from x1/x2 = -c/a. But as I said, in this case, it won't help because there is no better representation. It is possible that Numpy's linear algebra implementation is more robust than the classical method, though. They are computed solving the eigensystem of the companion matrix. For low degrees, you can work out the analytical solutions and see which one would be more stable. https://en.wikipedia.org/wiki/Companion_matrix /David. -------------- next part -------------- An HTML attachment was scrubbed... 
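Putting the two suggestions together: take the root for which the quadratic formula does not cancel (the one whose numerator has the same sign as -b) and recover the other one from the product of the roots, x1*x2 = c/a. A minimal sketch (assumes real roots):

import numpy as np

def stable_quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, avoiding cancellation for small a."""
    d = np.sqrt(b**2 - 4.0*a*c)         # assumes b**2 - 4*a*c >= 0
    q = -0.5 * (b + np.copysign(d, b))  # same sign as -b, so b and d never cancel
    return q / a, c / q                 # one root from the formula, the other from x1*x2 = c/a

print(stable_quadratic_roots(1.0e-15, 10000.0, 1.0))
# approximately (-1e+19, -0.0001), matching the SymPy roots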
From sergio_r at mail.com  Wed Nov  4 20:48:31 2015
From: sergio_r at mail.com (Sergio Rojas)
Date: Thu, 5 Nov 2015 02:48:31 +0100
Subject: [SciPy-User] Numerical precision: Scipy Vs Numpy
References:
Message-ID:

Following up on my question, thanks to Evgeni Burovski
[ https://mail.scipy.org/pipermail/scipy-user/2015-November/036753.html ]
and David
[ https://mail.scipy.org/pipermail/scipy-user/2015-November/036754.html ],
I just want to add that in Fortran 90, via real*16 (also called quad
precision), the output for the roots is (not sure about the rearrangement
proposed by David in this case):

FORTRAN 90 (via real*16)
a  = 1.00000000362749372553872184710144211E-0015
b  = 10000.000000000000000000000000000000
c  = 1.0000000000000000000000000000000000
x1 = -9999999963725062876.1997883398821330
x2 = -1.00000000000000000000001000000003626E-0004
a*x1**2 + b*x1 + c = 0.0000000000000000000000000000000000
(a*x1 + b)*x1 + c  = -1.05518034299496531738542345583594055E-0011
a*x2**2 + b*x2 + c = 0.0000000000000000000000000000000000
(a*x2 + b)*x2 + c  = 0.0000000000000000000000000000000000
(x1 + x2) + b/a    = 0.0000000000000000000000000000000000
x1*x2 - c/a        = 0.0000000000000000000000000000000000

Sergio

Sent: Wednesday, November 04, 2015 at 1:58 PM
From: "Sergio Rojas"
To: scipy-user at scipy.org
Subject: Numerical precision: Scipy Vs Numpy
[...]

From jmsachs at gmail.com  Thu Nov  5 15:00:12 2015
From: jmsachs at gmail.com (Jason Sachs)
Date: Thu, 5 Nov 2015 13:00:12 -0700
Subject: [SciPy-User] LTI simulation with time delay
Message-ID:

Does anyone know if there's a straightforward way to simulate LTI systems
in Python with numpy/scipy if one of the systems contains a time delay?

I have a SISO system like the one below where G(s) and H(s) are regular
rational transfer functions, but F(s) = e^(-sT)/(tau*s+1), and I would like
to simulate the step response.

[image: Inline image 1]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
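One common workaround for the delay (a sketch, not a full answer for the feedback diagram above): replace e^(-sT) by a low-order Pade approximant, e.g. e^(-sT) ~ (1 - sT/2)/(1 + sT/2), so that F(s) becomes a rational transfer function scipy.signal can simulate. With made-up values for T and tau:

import numpy as np
from scipy import signal

T, tau = 0.5, 1.0                            # example delay and time constant
num = [-T/2.0, 1.0]                          # 1 - (T/2)*s
den = np.polymul([T/2.0, 1.0], [tau, 1.0])   # (1 + (T/2)*s) * (tau*s + 1)
t, y = signal.step(signal.lti(num, den))     # step response of the approximated F(s)

The approximated F(s) can then be combined with G(s) and H(s) by the usual block-diagram algebra before simulating; higher-order Pade terms give a better match to the pure delay.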
Name: image.png Type: image/png Size: 23967 bytes Desc: not available URL: From andrea.gavana at gmail.com Mon Nov 9 02:23:46 2015 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Mon, 9 Nov 2015 08:23:46 +0100 Subject: [SciPy-User] "Order of items" optimization Message-ID: Hello List, I am sure the problem I am trying to solve is fairly known but my English/Google skills are failing me today. The issue I am facing is to find the "best" (or "optimal") ordering of some items (and the order is important). These items can only be taken at most once - no repetition, although an item can also be deemed unimportant and not taken at all. So basically it would be a 0-1 coefficient for each item, but the order in which these items are taken is fundamental. I don't have big issues in writing the objective function for this kind of problem, but I am unclear about the solution method: I have looked at the Knapsack problem and bin packing problem, but I am not sure they completely apply to my situation. Any pointer, suggestion (and maybe if you have a link with some Python code somewhere) would be very appreciated :-) . Thank you in advance for your help :-) . Andrea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Mon Nov 9 06:12:00 2015 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Mon, 9 Nov 2015 12:12:00 +0100 Subject: [SciPy-User] "Order of items" optimization In-Reply-To: References: Message-ID: On 9 Nov 2015 08:23, "Andrea Gavana" wrote: > The issue I am facing is to find the "best" (or "optimal") ordering of some items (and the order is important). These items can only be taken at most once - no repetition, although an item can also be deemed unimportant and not taken at all. So basically it would be a 0-1 coefficient for each item, but the order in which these items are taken is fundamental. What you are doing is some variation of the travelling salesman problem. It is a very standard problem in computer science. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Mon Nov 9 23:28:05 2015 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 10 Nov 2015 15:28:05 +1100 Subject: [SciPy-User] "Order of items" optimization In-Reply-To: References: Message-ID: You can create all the permutations by using the itertools.permutations generator. You could visit all the permutations in turn and calculate the objective function, selecting the best one. However, this is a brute force approach and its run time would depend on how many items you needed to look at. On 9 November 2015 at 22:12, Da?id wrote: > > On 9 Nov 2015 08:23, "Andrea Gavana" wrote: > > The issue I am facing is to find the "best" (or "optimal") ordering of > some items (and the order is important). These items can only be taken at > most once - no repetition, although an item can also be deemed unimportant > and not taken at all. So basically it would be a 0-1 coefficient for each > item, but the order in which these items are taken is fundamental. > > What you are doing is some variation of the travelling salesman problem. > It is a very standard problem in computer science. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrea.gavana at gmail.com Tue Nov 10 01:07:23 2015 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Tue, 10 Nov 2015 07:07:23 +0100 Subject: [SciPy-User] "Order of items" optimization In-Reply-To: References: Message-ID: Hi, On Tuesday, 10 November 2015, Andrew Nelson wrote: > You can create all the permutations by using the itertools.permutations > generator. You could visit all the permutations in turn and calculate the > objective function, selecting the best one. However, this is a brute force > approach and its run time would depend on how many items you needed to look > at. > > On 9 November 2015 at 22:12, Da?id > wrote: > >> >> On 9 Nov 2015 08:23, "Andrea Gavana" > > wrote: >> > The issue I am facing is to find the "best" (or "optimal") ordering of >> some items (and the order is important). These items can only be taken at >> most once - no repetition, although an item can also be deemed unimportant >> and not taken at all. So basically it would be a 0-1 coefficient for each >> item, but the order in which these items are taken is fundamental. >> >> What you are doing is some variation of the travelling salesman problem. >> It is a very standard problem in computer science. >> > I thought about that, the only problem is that I don't have an initial "city", i.e., all the events can be the first point. That would mean solving a TSP problem for each event as starting point - a few optimizations... Generating all the permutations is not feasible either - I have 118 events to permute, and that gives an enormous number of possible permutations... Thank you again for providing your suggestions! Andrea. > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> >> https://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > -- > _____________________________________ > Dr. Andrew Nelson > > > _____________________________________ > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Tue Nov 10 04:09:20 2015 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 10 Nov 2015 10:09:20 +0100 Subject: [SciPy-User] "Order of items" optimization In-Reply-To: References: Message-ID: On 10 November 2015 at 07:07, Andrea Gavana wrote: > > I thought about that, the only problem is that I don't have an initial > "city", i.e., all the events can be the first point. That would mean > solving a TSP problem for each event as starting point - a few > optimizations... > Your starting point can be a "virtual" city that is equidistant to all the others. This distance just has to be large enough so you don't take that path back. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfmoraes at cti.gov.br Tue Nov 10 04:46:30 2015 From: tfmoraes at cti.gov.br (Thiago Franco Moraes) Date: Tue, 10 Nov 2015 09:46:30 +0000 Subject: [SciPy-User] "Order of items" optimization In-Reply-To: References: Message-ID: On Tue, Nov 10, 2015 at 7:09 AM Da?id wrote: > On 10 November 2015 at 07:07, Andrea Gavana > wrote: > >> >> I thought about that, the only problem is that I don't have an initial >> "city", i.e., all the events can be the first point. That would mean >> solving a TSP problem for each event as starting point - a few >> optimizations... >> > > Your starting point can be a "virtual" city that is equidistant to all the > others. This distance just has to be large enough so you don't take that > path back. 
> > You could try to generate a minimun spanning tree since MST is an aproximation to the traveling salesman problem [1] [1] - https://www.ics.uci.edu/~eppstein/161/960206.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmar at net4werling.de Wed Nov 11 13:12:13 2015 From: elmar at net4werling.de (elmar werling) Date: Wed, 11 Nov 2015 19:12:13 +0100 Subject: [SciPy-User] array creation Message-ID: Hi, is there a function foo(ncols, min, max, delta) in mumpy/scipy to create an array such as 0 100 0 0 0 100 10 90 0 10 80 10 10 70 20 10 60 30 10 50 40 10 40 50 10 30 60 20 20 70 20 10 80 20 0 90 20 80 0 ... ... ... 90 10 0 90 0 10 100 0 0 Any help or link is welcome Elmar From max at shron.net Wed Nov 11 13:44:46 2015 From: max at shron.net (Max Shron) Date: Wed, 11 Nov 2015 13:44:46 -0500 Subject: [SciPy-User] array creation In-Reply-To: References: Message-ID: Can you specify a little more clearly what you're looking for? I'm not seeing the pattern. On Wed, Nov 11, 2015 at 1:12 PM, elmar werling wrote: > Hi, > > is there a function foo(ncols, min, max, delta) in mumpy/scipy to create > an array such as > > 0 100 0 > 0 0 100 > 10 90 0 > 10 80 10 > 10 70 20 > 10 60 30 > 10 50 40 > 10 40 50 > 10 30 60 > 20 20 70 > 20 10 80 > 20 0 90 > 20 80 0 > ... ... ... > 90 10 0 > 90 0 10 > 100 0 0 > > Any help or link is welcome > > Elmar > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmar at net4werling.de Wed Nov 11 14:14:23 2015 From: elmar at net4werling.de (elmar werling) Date: Wed, 11 Nov 2015 20:14:23 +0100 Subject: [SciPy-User] array creation In-Reply-To: References: Message-ID: I need a generalized function foo(ncols, min, max, delta) for # two component mixing ratios (ncols=2, min=0, max=1, delta=0.1) for x1 in np.arange(0.0, 1.05, 0.1): x2 = 1.0 - x1 print (x1, x2, ' = ', x1+x2) # three component mixing ratios (ncols=3, min=0, max=1, delta=0.1) for x1 in np.arange(0.0, 1.05, 0.1): for x2 in np.arange(1.0 - x1, -0.01, -0.1): x3 = 1.0 - x1 - x2 print(x1, x2, x3, ' = ', x1+x2+x3) # four component mixing ratios (ncols=3, min=0, max=1, delta=0.1) for x1 in np.arange(0.0, 1.05, 0.1): for x2 in np.arange(1.0 - x1, -0.01, -0.1): for x3 in np.arange(1.0 - x1 - x2, -0.01, -0.1): x4 = 1.0 - x1 - x2 - x3 print(x1, x2, x3, x4, ' = ', x1+x2+x3+x4) On 11.11.2015 19:44, Max Shron wrote: > Can you specify a little more clearly what you're looking for? I'm not > seeing the pattern. > > On Wed, Nov 11, 2015 at 1:12 PM, elmar werling > wrote: > > Hi, > > is there a function foo(ncols, min, max, delta) in mumpy/scipy to > create an array such as > > 0 100 0 > 0 0 100 > 10 90 0 > 10 80 10 > 10 70 20 > 10 60 30 > 10 50 40 > 10 40 50 > 10 30 60 > 20 20 70 > 20 10 80 > 20 0 90 > 20 80 0 > ... ... ... 
> 90 10 0 > 90 0 10 > 100 0 0 > > Any help or link is welcome > > Elmar > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > From elmar at net4werling.de Wed Nov 11 14:17:42 2015 From: elmar at net4werling.de (elmar werling) Date: Wed, 11 Nov 2015 20:17:42 +0100 Subject: [SciPy-User] array creation In-Reply-To: References: Message-ID: with the following pattern the pattern is 0 100 0 0 0 100 10 90 0 10 80 10 10 70 20 10 60 30 10 50 40 10 40 50 10 30 60 10 20 70 10 10 80 10 0 90 20 80 0 20 70 10 20 60 20 20 50 30 20 40 40 20 30 50 20 20 60 20 10 70 20 0 80 30 70 0 30 60 10 30 50 20 30 40 30 30 30 40 30 20 50 30 10 60 30 0 70 ... ... ... On 11.11.2015 19:44, Max Shron wrote: > Can you specify a little more clearly what you're looking for? I'm not > seeing the pattern. > > On Wed, Nov 11, 2015 at 1:12 PM, elmar werling > wrote: > > Hi, > > is there a function foo(ncols, min, max, delta) in mumpy/scipy to > create an array such as > > 0 100 0 > 0 0 100 > 10 90 0 > 10 80 10 > 10 70 20 > 10 60 30 > 10 50 40 > 10 40 50 > 10 30 60 > 20 20 70 > 20 10 80 > 20 0 90 > 20 80 0 > ... ... ... > 90 10 0 > 90 0 10 > 100 0 0 > > Any help or link is welcome > > Elmar > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > From guziy.sasha at gmail.com Wed Nov 11 16:50:36 2015 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Wed, 11 Nov 2015 16:50:36 -0500 Subject: [SciPy-User] array creation In-Reply-To: References: Message-ID: Elmar: You could use smth like this.... import itertools as itt import numpy as np def foo(ncol=1, start=0, end=10, delta=0.1): if ncol==1: return np.arange(start, end, delta) else: ranges = ncol * [np.arange(start, end, delta), ] for z in itt.product(*ranges): yield z for (x1, x2) in foo(ncol=2): print(x1, x2) This is not exactly what you need but might help. Cheers 2015-11-11 14:17 GMT-05:00 elmar werling : > with the following pattern > > the pattern is > > 0 100 0 > 0 0 100 > 10 90 0 > 10 80 10 > 10 70 20 > 10 60 30 > 10 50 40 > 10 40 50 > 10 30 60 > 10 20 70 > 10 10 80 > 10 0 90 > 20 80 0 > 20 70 10 > 20 60 20 > 20 50 30 > 20 40 40 > 20 30 50 > 20 20 60 > 20 10 70 > 20 0 80 > 30 70 0 > 30 60 10 > 30 50 20 > 30 40 30 > 30 30 40 > 30 20 50 > 30 10 60 > 30 0 70 > ... ... ... > > On 11.11.2015 19:44, Max Shron wrote: > >> Can you specify a little more clearly what you're looking for? I'm not >> seeing the pattern. >> >> On Wed, Nov 11, 2015 at 1:12 PM, elmar werling > > wrote: >> >> Hi, >> >> is there a function foo(ncols, min, max, delta) in mumpy/scipy to >> create an array such as >> >> 0 100 0 >> 0 0 100 >> 10 90 0 >> 10 80 10 >> 10 70 20 >> 10 60 30 >> 10 50 40 >> 10 40 50 >> 10 30 60 >> 20 20 70 >> 20 10 80 >> 20 0 90 >> 20 80 0 >> ... ... ... 
>> 90 10 0 >> 90 0 10 >> 100 0 0 >> >> Any help or link is welcome >> >> Elmar >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -- Sasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Nov 11 17:01:11 2015 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Nov 2015 22:01:11 +0000 Subject: [SciPy-User] array creation In-Reply-To: References: Message-ID: On Wed, Nov 11, 2015 at 6:12 PM, elmar werling wrote: > > Hi, > > is there a function foo(ncols, min, max, delta) in mumpy/scipy to create an array such as > > 0 100 0 > 0 0 100 > 10 90 0 > 10 80 10 > 10 70 20 > 10 60 30 > 10 50 40 > 10 40 50 > 10 30 60 > 20 20 70 > 20 10 80 > 20 0 90 > 20 80 0 > ... ... ... > 90 10 0 > 90 0 10 > 100 0 0 > > Any help or link is welcome I answered a very similar question a couple of years ago: https://mail.scipy.org/pipermail/numpy-discussion/2013-September/067841.html -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmar at net4werling.de Thu Nov 12 03:05:12 2015 From: elmar at net4werling.de (elmar werling) Date: Thu, 12 Nov 2015 09:05:12 +0100 Subject: [SciPy-User] array creation In-Reply-To: References: Message-ID: thank you for help import itertools import numpy as np ncols = 3 start, stop, step = 0.0, 1.0, 0.2 iterable = np.arange(start, stop+step/2, step) mixing_ratios = itertools.product(iterable, repeat=ncols) mixing_ratios = [i for i in mixing_ratios if np.isclose(sum(i), 1.0)] On 11.11.2015 19:12, elmar werling wrote: > Hi, > > is there a function foo(ncols, min, max, delta) in mumpy/scipy to create > an array such as > > 0 100 0 > 0 0 100 > 10 90 0 > 10 80 10 > 10 70 20 > 10 60 30 > 10 50 40 > 10 40 50 > 10 30 60 > 20 20 70 > 20 10 80 > 20 0 90 > 20 80 0 > ... ... ... > 90 10 0 > 90 0 10 > 100 0 0 > > Any help or link is welcome > > Elmar From shoyer at gmail.com Thu Nov 12 15:51:13 2015 From: shoyer at gmail.com (Stephan Hoyer) Date: Thu, 12 Nov 2015 12:51:13 -0800 Subject: [SciPy-User] ANN: properscoring: proper scoring rules in Python Message-ID: I'm pleased to announce the release of a new open source package, properscoring, for calculating proper scoring rules in Python: https://github.com/TheClimateCorporation/properscoring Evaluation methods that are "strictly proper" cannot be artificially improved through hedging, which makes them fair methods for accessing the accuracy of probabilistic forecasts. These methods are useful for evaluating machine learning or statistical models that produce probabilities instead of point estimates. In particular, these rules are often used for evaluating weather forecasts. 
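For example, scoring an ensemble forecast against observations looks roughly like this (a short sketch using the routines described below):

import numpy as np
import properscoring as ps

obs = np.array([1.2, 0.3, 2.7])               # observed values
ens = np.random.randn(3, 50) + obs[:, None]   # a 50-member ensemble per observation

print(ps.crps_ensemble(obs, ens).mean())      # mean CRPS, lower is better
print(ps.crps_gaussian(obs, mu=obs, sig=1.0)) # CRPS against Gaussian forecasts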
properscoring currently contains optimized and extensively tested routines for calculating the Continuous Ranked Probability Score (CRPS) and the Brier Score: - CRPS for an ensemble forecast - CRPS for a Gaussian distribution - CRPS for an arbitrary cumulative distribution function - Brier score for binary probability forecasts - Brier score for threshold exceedances with an ensemble forecast If you're interested in these types of metrics, we'd love to hear your thoughts on this package. Cheers, Stephan -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Nov 12 16:11:14 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 12 Nov 2015 14:11:14 -0700 Subject: [SciPy-User] Numpy 1.10.2rc1 Message-ID: Hi All, I am pleased to announce the release of Numpy 1.10.2rc1. This release should fix the problems exposed in 1.10.1, which is not to say there are no remaining problems. Please test this thoroughly, exspecially if you experienced problems with 1.10.1. Julian Taylor has opened an issue relating to cblas detection on Debian (and probably Debian derived distributions) that is not dealt with in this release. Hopefully a solution will be available before the final. To all who reported issues with 1.10.1 and to those who helped close them, a big thank you. Source and binary files may be found on Sourceforge . Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From markvanderw at gmail.com Wed Nov 18 09:06:07 2015 From: markvanderw at gmail.com (Mark vdw) Date: Wed, 18 Nov 2015 14:06:07 +0000 Subject: [SciPy-User] Access to objective function and gradient in minimize callback Message-ID: Hi all, I want to keep track of the optimisation history when using minimize. I want to keep track of the objective function value and the gradient. However, the callback function is only given the current parameters, not the actual value of the objective function and its gradient. So whenever I want to store the current fval and gradient, I have to recompute them in the callback, which is wasteful, especially in the case where I want to store the values at every iteration. Is there a way to get the fval and gradient that have already been computed inside the optimiser to the callback function? Many thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Wed Nov 18 09:20:40 2015 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 18 Nov 2015 15:20:40 +0100 Subject: [SciPy-User] Access to objective function and gradient in minimize callback In-Reply-To: References: Message-ID: On 18 November 2015 at 15:06, Mark vdw wrote: > Is there a way to get the fval and gradient that have already been > computed inside the optimiser to the callback function? > This is what I did in a similar scenario: make your objective function a callable class, and on each call save the value and the result. The callback will check if the parameters passed are the same as the last time it was called, and used the cached value. -------------- next part -------------- An HTML attachment was scrubbed... 
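A minimal sketch of that callable-class pattern (the class and test function here are made up for illustration, written for minimize with jac=True so one call returns both value and gradient):

import numpy as np
from scipy.optimize import minimize

class CachingObjective(object):
    """Wrap an objective returning (fval, grad) and remember the last evaluation."""
    def __init__(self, fun):
        self.fun = fun
        self.last_x = None
        self.last_fval = None
        self.last_grad = None
        self.history = []

    def __call__(self, x):
        fval, grad = self.fun(x)
        self.last_x = np.array(x, copy=True)
        self.last_fval, self.last_grad = fval, np.array(grad, copy=True)
        return fval, grad

    def callback(self, xk):
        if self.last_x is not None and np.array_equal(xk, self.last_x):
            self.history.append((self.last_fval, self.last_grad))  # reuse cached values
        else:
            self.history.append(self.fun(xk))                      # fall back to recomputing

def rosen(x):
    f = 100.0*(x[1] - x[0]**2)**2 + (1.0 - x[0])**2
    g = np.array([-400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0]),
                  200.0*(x[1] - x[0]**2)])
    return f, g

obj = CachingObjective(rosen)
res = minimize(obj, x0=[2.0, 2.0], jac=True, method='BFGS', callback=obj.callback)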
URL: From guillaume at damcb.com Wed Nov 18 09:23:17 2015 From: guillaume at damcb.com (Guillaume Gay) Date: Wed, 18 Nov 2015 15:23:17 +0100 Subject: [SciPy-User] Access to objective function and gradient in minimize callback In-Reply-To: References: Message-ID: <564C89D5.4040005@damcb.com> Hi, Couldn't you store the associated data during the optimization, as part of the `fun` argument of minimize, which is a function? Best, Guilllaume Le 18/11/2015 15:06, Mark vdw a ?crit : > Hi all, > > I want to keep track of the optimisation history when using minimize. > I want to keep track of the objective function value and the gradient. > However, the callback function is only given the current parameters, > not the actual value of the objective function and its gradient. So > whenever I want to store the current fval and gradient, I have to > recompute them in the callback, which is wasteful, especially in the > case where I want to store the values at every iteration. > > Is there a way to get the fval and gradient that have already been > computed inside the optimiser to the callback function? > > Many thanks, > Mark > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user -- Guillaume Gay, Data analysis and modeling in Comutational Biolgy http://damcb.com 43 rue Horace Bertin 13005 Marseille +33 953 55 98 89 +33 651 95 94 00 n?SIRET 751 175 233 00020 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gustavo.goretkin at gmail.com Wed Nov 18 09:24:21 2015 From: gustavo.goretkin at gmail.com (Gustavo Goretkin) Date: Wed, 18 Nov 2015 09:24:21 -0500 Subject: [SciPy-User] linear objective, non-linear constraint Message-ID: I have what is a linear program, except for a non-linear constraint of the form S^2 + C^2 = 1. The other constraints are linear, as is the objective. I'm trying to use optimize.minimize interface and the SLSQP solver (which I believe is the only solver that supports non-linear and equality constraints), but I get status: 6 success: False njev: 1 nfev: 1 fun: -0.0 x: array([ 5., 5., 1., 0., 0., 6., 6., 4., 4., 6., 4., 4., 6.]) message: 'Singular matrix C in LSQ subproblem' jac: array([-0., -0., -0., -0., -1., -0., -0., -0., -0., -0., -0., -0., -0., 0.]) nit: 1 Is this because the objective is not positive definite? Or is there another reason? I've looked below, but I don't see where in the fortan code that the mode gets set to 6 https://github.com/scipy/scipy/blob/v0.16.1/scipy/optimize/slsqp/slsqp_optmz.f https://github.com/scipy/scipy/blob/a9fb36bc44bad4bbd2c1a41cb43c6f10925b38ae/scipy/optimize/slsqp.py#L405 -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.trendelkampschroer at gmail.com Thu Nov 19 04:55:07 2015 From: benjamin.trendelkampschroer at gmail.com (Benjamin Trendelkamp-Schroer) Date: Thu, 19 Nov 2015 10:55:07 +0100 Subject: [SciPy-User] linear objective, non-linear constraint In-Reply-To: References: Message-ID: <564D9C7B.1090009@gmail.com> Are S, and C the only unknowns? Can you give your other constraints? Benjamin On 18.11.2015 15:24, Gustavo Goretkin wrote: > I have what is a linear program, except for a non-linear constraint of > the form > S^2 + C^2 = 1. The other constraints are linear, as is the objective. 
> > I'm trying to use optimize.minimize interface and the SLSQP solver > (which I believe is the only solver that supports non-linear and > equality constraints), but I get > > status: 6 > success: False > njev: 1 > nfev: 1 > fun: -0.0 > x: array([ 5., 5., 1., 0., 0., 6., 6., 4., 4., 6., 4., 4., 6.]) > message: 'Singular matrix C in LSQ subproblem' > jac: array([-0., -0., -0., -0., -1., -0., -0., -0., -0., -0., -0., -0., -0., 0.]) > nit: 1 > > Is this because the objective is not positive definite? Or is there another reason? > > I've looked below, but I don't see where in the fortan code that the mode gets set to 6 > https://github.com/scipy/scipy/blob/v0.16.1/scipy/optimize/slsqp/slsqp_optmz.f > https://github.com/scipy/scipy/blob/a9fb36bc44bad4bbd2c1a41cb43c6f10925b38ae/scipy/optimize/slsqp.py#L405 > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -- Benjamin Trendelkamp-Schroer Fritschestrasse 24 10585 Berlin From ewm at redtetrahedron.org Thu Nov 19 07:25:35 2015 From: ewm at redtetrahedron.org (Eric Moore) Date: Thu, 19 Nov 2015 07:25:35 -0500 Subject: [SciPy-User] linear objective, non-linear constraint In-Reply-To: References: Message-ID: On Wednesday, November 18, 2015, Gustavo Goretkin < gustavo.goretkin at gmail.com> wrote: > I have what is a linear program, except for a non-linear constraint of the > form > S^2 + C^2 = 1. The other constraints are linear, as is the objective. > > I'm trying to use optimize.minimize interface and the SLSQP solver (which > I believe is the only solver that supports non-linear and equality > constraints), but I get > > status: 6 > success: False > njev: 1 > nfev: 1 > fun: -0.0 > x: array([ 5., 5., 1., 0., 0., 6., 6., 4., 4., 6., 4., 4., 6.]) > message: 'Singular matrix C in LSQ subproblem' > jac: array([-0., -0., -0., -0., -1., -0., -0., -0., -0., -0., -0., -0., -0., 0.]) > nit: 1 > > Is this because the objective is not positive definite? Or is there another reason? > > I've looked below, but I don't see where in the fortan code that the mode gets set to 6 > https://github.com/scipy/scipy/blob/v0.16.1/scipy/optimize/slsqp/slsqp_optmz.f > https://github.com/scipy/scipy/blob/a9fb36bc44bad4bbd2c1a41cb43c6f10925b38ae/scipy/optimize/slsqp.py#L405 > > Generally speaking, people are more easily able to help you if you post the code that fails, reduced as much as possible to isolate the problem. It is hard to say what you should change from what you have given us. If that is your only difficult constraint, you could always replace the C and S by a new variable t, each use of C by cos(t) and S by sin(t). This new variable could be then be unconstrained. This doesn't solve your current issue though it may make setting up the problem a little easier. Eric -------------- next part -------------- An HTML attachment was scrubbed... 
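A small sketch of that substitution, with a made-up linear objective and constraint just to show the shape of the reformulation:

import numpy as np
from scipy.optimize import minimize

# original variables [S, C, y] with S**2 + C**2 == 1; reparametrize S = sin(t), C = cos(t)
def unpack(z):
    t, y = z
    return np.sin(t), np.cos(t), y

def objective(z):
    S, C, y = unpack(z)
    return 2.0*S - 3.0*C + y                              # assumed linear objective

cons = [{'type': 'ineq', 'fun': lambda z: unpack(z)[2]}]  # assumed linear constraint y >= 0

res = minimize(objective, x0=[0.1, 1.0], method='SLSQP', constraints=cons)
S, C, y = unpack(res.x)
print(S**2 + C**2)   # equals 1 by construction, no equality constraint needed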
URL: From pav at iki.fi Thu Nov 19 07:35:28 2015 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 19 Nov 2015 12:35:28 +0000 (UTC) Subject: [SciPy-User] linear objective, non-linear constraint References: Message-ID: Wed, 18 Nov 2015 09:24:21 -0500, Gustavo Goretkin kirjoitti: > I'm trying to use optimize.minimize interface and the SLSQP solver > (which I believe is the only solver that supports non-linear and > equality constraints), but I get > > status: 6 > success: False > njev: 1 nfev: 1 > fun: -0.0 > x: array([ 5., 5., 1., 0., 0., 6., 6., 4., 4., 6., > 4., 4., 6.]) > message: 'Singular matrix C in LSQ subproblem' One thing potentially worth trying is to give upper and lower bounds for all of the variables. The SLSQP solver as it currently is simulates unconstrained variables by using some large numeric values for the upper/lower bounds --- these might cause numerical problems for the constraints. From rob.clewley at gmail.com Thu Nov 19 09:37:59 2015 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 19 Nov 2015 09:37:59 -0500 Subject: [SciPy-User] Access to objective function and gradient in minimize callback In-Reply-To: References: Message-ID: Hi Mark, On Wed, Nov 18, 2015 at 9:06 AM, Mark vdw wrote: > Is there a way to get the fval and gradient that have already been computed > inside the optimiser to the callback function? You basically want a memoized version aka a cache. That's not too tricky to write yourself but PyDSTool's optimization toolbox package (mostly code donated by Matthieu Brucher) already does this nicely. It also keeps a history of all the steps in case you need to do diagnostics or backtrack. From gustavo.goretkin at gmail.com Thu Nov 19 15:53:39 2015 From: gustavo.goretkin at gmail.com (Gustavo Goretkin) Date: Thu, 19 Nov 2015 15:53:39 -0500 Subject: [SciPy-User] linear objective, non-linear constraint In-Reply-To: References: Message-ID: Sorry for not including the whole problem originally. I was hoping someone could tell me what a singular matrix C means in terms of the optimization problem. In any case, here is the setup of the optimization problem that is giving me trouble: https://gist.github.com/goretkin/e54819eb0b3831d09daa I tried adding tight-ish bounds as suggested (+/- 100 on each decision variable), but that didn't help Here is a simplified version, which does work. It's a linear program. https://gist.github.com/goretkin/acc92375430bbe374b0c For what is worth, here is my attempt at modeling the same problem in Julia's JuMP: https://gist.github.com/goretkin/facccca9d99b6f55c175. Using IPOPT, the solver converges to what I expect. On Thu, Nov 19, 2015 at 7:35 AM, Pauli Virtanen wrote: > Wed, 18 Nov 2015 09:24:21 -0500, Gustavo Goretkin kirjoitti: > > I'm trying to use optimize.minimize interface and the SLSQP solver > > (which I believe is the only solver that supports non-linear and > > equality constraints), but I get > > > > status: 6 > > success: False > > njev: 1 nfev: 1 > > fun: -0.0 > > x: array([ 5., 5., 1., 0., 0., 6., 6., 4., 4., 6., > > 4., 4., 6.]) > > message: 'Singular matrix C in LSQ subproblem' > > One thing potentially worth trying is to give upper and lower bounds for > all of the variables. > > The SLSQP solver as it currently is simulates unconstrained variables by > using some large numeric values for the upper/lower bounds --- these > might cause numerical problems for the constraints. 
> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.puiseux at univ-pau.fr Fri Nov 20 03:37:44 2015 From: pierre.puiseux at univ-pau.fr (Puiseux Pierre) Date: Fri, 20 Nov 2015 09:37:44 +0100 Subject: [SciPy-User] Gmres preconditionner Message-ID: <461DA54F-3363-482D-845E-9DC2855A17FE@univ-pau.fr> Hello, i try to solve a linear sparse system using scipy.sparse.linalg.gmres. >>> w, v = gmres(A, b) It works fine. Now i?m trying to improve the iterations, using scipy.sparse.linalg.spilu preconditionner : >>> invA = spilu(A) >>> w,v = gmres(A, b, M=invA) File "", line 2, in gmres File "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/iterative.py", line 85, in non_reentrant return func(*a, **kw) File "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/iterative.py", line 422, in gmres A,M,x,b,postprocess = make_system(A,M,x0,b,xtype) File "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/utils.py", line 131, in make_system M = aslinearoperator(M) File "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/interface.py", line 682, in aslinearoperator raise TypeError('type not understood') TypeError: type not understood Do i mistake somewhere ? Thanks for your answer. Pierre Puiseux -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecarlson at eng.ua.edu Fri Nov 20 12:51:28 2015 From: ecarlson at eng.ua.edu (Eric Carlson) Date: Fri, 20 Nov 2015 11:51:28 -0600 Subject: [SciPy-User] Gmres preconditionner In-Reply-To: <461DA54F-3363-482D-845E-9DC2855A17FE@univ-pau.fr> References: <461DA54F-3363-482D-845E-9DC2855A17FE@univ-pau.fr> Message-ID: A little tougher to track down than I thought, but you are missing a couple of steps: P = scipy.sparse.linalg.spilu(matrix, fill_factor=int(math.sqrt(m))) M_x = lambda x: P.solve(x) M = scipy.sparse.linalg.LinearOperator((n * m, n * m), M_x) #(n*m,n*m)==matrix shape result = scipy.sparse.linalg.lgmres(matrix, b, tol=1e-5, M=M) On 11/20/2015 2:37 AM, Puiseux Pierre wrote: > > Hello, > > > i try to solve a linear sparse system using scipy.sparse.linalg.gmres. > > >>> w, v = gmres(A, b) > > > It works fine. > > Now i?m trying to improve the iterations, using > scipy.sparse.linalg.spilu preconditionner : > > > >>> invA = spilu(A) > > >>> w,v = gmres(A, b, M=invA) > > > File "", line 2, in gmres > File > "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/iterative.py", > line 85, in non_reentrant > return func(*a, **kw) > File > "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/iterative.py", > line 422, in gmres > A,M,x,b,postprocess = make_system(A,M,x0,b,xtype) > File > "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/isolve/utils.py", > line 131, in make_system > M = aslinearoperator(M) > File > "/usr/local/lib/python2.7/site-packages/scipy/sparse/linalg/interface.py", > line 682, in aslinearoperator > raise TypeError('type not understood') > TypeError: type not understood > > Do i mistake somewhere ? > > Thanks for your answer. 
> > Pierre Puiseux > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > From jeffreback at gmail.com Sat Nov 21 08:43:52 2015 From: jeffreback at gmail.com (Jeff Reback) Date: Sat, 21 Nov 2015 08:43:52 -0500 Subject: [SciPy-User] ANN: pandas v0.17.1 Released Message-ID: Hi, We are proud to announce that *pandas* has become a sponsored project of the NUMFocus organization This will help ensure the success of development of *pandas* as a world-class open-source project. This is a minor bug-fix release from 0.17.0 and includes a large number of bug fixes along several new features, enhancements, and performance improvements. We recommend that all users upgrade to this version. This was a release of 5 weeks with 176 commits by 61 authors encompassing 84 issues and 128 pull-requests. *What is it:* *pandas* is a Python package providing fast, flexible, and expressive data structures designed to make working with ?relational? or ?labeled? data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. *Highlights*: - Support for Conditional HTML Formatting, see here - Releasing the GIL on the csv reader & other ops, see here - Fixed regression in DataFrame.drop_duplicates from 0.16.2, causing incorrect results on integer values see Issue 11376 See the Whatsnew for much more information and the full Documentation link. *How to get it:* Source tarballs, windows wheels, and macosx wheels are available on PyPI Installation via conda is: - conda install pandas windows wheels are courtesy of Christoph Gohlke and are built on Numpy 1.9 macosx wheels are courtesy of Matthew Brett *Issues:* Please report any issues on our issue tracker : Jeff *Thanks to all of the contributors* * - Aleksandr Drozd - Alex Chase - Anthonios Partheniou - BrenBarn - Brian J. McGuirk - Chris - Christian Berendt - Christian Perez - Cody Piersall - Data & Code Expert Experimenting with Code on Data - DrIrv - Evan Wright - Guillaume Gay - Hamed Saljooghinejad - Iblis Lin - Jake VanderPlas - Jan Schulz - Jean-Mathieu Deschenes - Jeff Reback - Jimmy Callin - Joris Van den Bossche - K.-Michael Aye - Ka Wo Chen - Lo?c S?guin-C - Luo Yicheng - Magnus J?ud - Manuel Leonhardt - Matthew Gilbert - Maximilian Roos - Michael - Nicholas Stahl - Nicolas Bonnotte - Pastafarianist - Petra Chong - Phil Schaf - Philipp A - Rob deCarvalho - Roman Khomenko - R?my L?one - Sebastian Bank - Thierry Moisan - Tom Augspurger - Tux1 - Varun - Wieland Hoffmann - Winterflower - Yoav Ram - Younggun Kim - Zeke - ajcr - azuranski - behzad nouri - cel4 - emilydolson - hironow - lexual - llllllllll - rockg - silentquasar - sinhrks - taeold * -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyrille.rossant at gmail.com Sun Nov 22 14:26:07 2015 From: cyrille.rossant at gmail.com (Cyrille Rossant) Date: Sun, 22 Nov 2015 20:26:07 +0100 Subject: [SciPy-User] ANN: New edition of the IPython minibook Message-ID: Hi all, The second edition of the book *Learning IPython for Interactive Computing and Data Visualization* has been released (Packt Publishing). The book now targets Python 3, IPython 4, and the Jupyter Notebook. 
There is a new introduction to the Python programming language for complete beginners, as well as new code examples covering pandas and NumPy for data analysis and numerical computing. There are also contents for more advanced users, like parallel computing with IPython and high-performance computing with Numba and Cython. All code examples are available on GitHub as Jupyter notebooks. You'll find more information here: * IPython Books website: * Book page on Packt's website: * GitHub repo: Cyrille From ellisonbg at gmail.com Sun Nov 22 15:39:11 2015 From: ellisonbg at gmail.com (Brian Granger) Date: Sun, 22 Nov 2015 12:39:11 -0800 Subject: [SciPy-User] [jupyter] ANN: New edition of the IPython minibook In-Reply-To: References: Message-ID: Fantastic and *many* congrats!!! On Sun, Nov 22, 2015 at 11:26 AM, Cyrille Rossant wrote: > Hi all, > > The second edition of the book *Learning IPython for Interactive > Computing and Data Visualization* has been released (Packt > Publishing). > > The book now targets Python 3, IPython 4, and the Jupyter Notebook. > There is a new introduction to the Python programming language for > complete beginners, as well as new code examples covering pandas and > NumPy for data analysis and numerical computing. There are also > contents for more advanced users, like parallel computing with IPython > and high-performance computing with Numba and Cython. > > All code examples are available on GitHub as Jupyter notebooks. > > You'll find more information here: > > * IPython Books website: > * Book page on Packt's website: > > * GitHub repo: > > Cyrille > > -- > You received this message because you are subscribed to the Google Groups "Project Jupyter" group. > To unsubscribe from this group and stop receiving emails from it, send an email to jupyter+unsubscribe at googlegroups.com. > To post to this group, send email to jupyter at googlegroups.com. > To view this discussion on the web visit https://groups.google.com/d/msgid/jupyter/CA%2B-1RQRZjWq1mWWMBts-Q0uosP8QhC7tT_vWRUsaKnUKYe7-7w%40mail.gmail.com. > For more options, visit https://groups.google.com/d/optout. -- Brian E. Granger Associate Professor of Physics and Data Science Cal Poly State University, San Luis Obispo @ellisonbg on Twitter and GitHub bgranger at calpoly.edu and ellisonbg at gmail.com From delaramq at gmail.com Sun Nov 22 16:21:04 2015 From: delaramq at gmail.com (Delaram Ghoreishi) Date: Sun, 22 Nov 2015 16:21:04 -0500 Subject: [SciPy-User] Finding Derivatives of a 2D Interpolation Using RectBivariateSpline.__call__ Message-ID: Hi. I need to find partial derivatives of 2d interpolation using RectBivariateSpline.__call__ from scipy. This is the code that I have so far: import numpy as np from scipy import interpolate xm = np.arange(-180.,182.,2) #rank-1 array length 181 ym = np.arange(-180.,182.,2) #rank-1 array length 181 zm = np.loadtxt('data.dat', dtype = np.double) #rank-2 array length (181x181) V = interpolate.RectBivariateSpline(xm, ym, zm, s=0) this code works fine when I need to know the value of the interpolated function at a given point, now how should I proceed from here to also find the derivatives at any specific point. Thanks a lot for the help. -- Delaram Ghoreishi Ph.D. Student Department of Physics University of Florida P.O. Box 118440 Gainesville, Florida 32611 -------------- next part -------------- An HTML attachment was scrubbed... 
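One way to get the partial derivatives from the object you already have is the ev method, which evaluates at scattered points and accepts derivative orders (note: the dx/dy keywords need a reasonably recent SciPy, roughly 0.14 or newer; older versions will reject them):

dVdx = V.ev(30.0, -45.0, dx=1, dy=0)   # dV/dx at (x, y) = (30, -45)
dVdy = V.ev(30.0, -45.0, dx=0, dy=1)   # dV/dy at the same point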
URL: From evgeny.burovskiy at gmail.com Sun Nov 22 18:18:19 2015 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Sun, 22 Nov 2015 23:18:19 +0000 Subject: [SciPy-User] Finding Derivatives of a 2D Interpolation Using RectBivariateSpline.__call__ In-Reply-To: References: Message-ID: In [20]: x = np.arange(8) In [21]: y = np.arange(8) In [22]: z = x[:, None] + y[None, :] In [23]: spl = RectBivariateSpline(x, y, z) In [24]: spl(x, y, grid=False, dx=1) # Notice the dx argument. Out[24]: array([ 1., 1., 1., 1., 1., 1., 1., 1.]) HTH, Evgeni On Sun, Nov 22, 2015 at 9:21 PM, Delaram Ghoreishi wrote: > Hi. I need to find partial derivatives of 2d interpolation using > RectBivariateSpline.__call__ from scipy. This is the code that I have so > far: > > import numpy as np > from scipy import interpolate > xm = np.arange(-180.,182.,2) #rank-1 array length 181 > ym = np.arange(-180.,182.,2) #rank-1 array length 181 > zm = np.loadtxt('data.dat', dtype = np.double) #rank-2 array length > (181x181) > V = interpolate.RectBivariateSpline(xm, ym, zm, s=0) > > this code works fine when I need to know the value of the interpolated > function at a given point, now how should I proceed from here to also find > the derivatives at any specific point. > > Thanks a lot for the help. > > > -- > Delaram Ghoreishi > Ph.D. Student > Department of Physics > University of Florida > P.O. Box 118440 > Gainesville, Florida 32611 > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > From delaramq at gmail.com Sun Nov 22 18:51:14 2015 From: delaramq at gmail.com (Delaram Ghoreishi) Date: Sun, 22 Nov 2015 18:51:14 -0500 Subject: [SciPy-User] Finding Derivatives of a 2D Interpolation Using RectBivariateSpline.__call__ In-Reply-To: References: Message-ID: Hello Evgeni, I tried what you told me to do (copy and pasted your code basically) but I got this error: __call__() got an unexpected keyword argument 'grid' I have no idea what's wrong here. I appreciate your help a lot. On Sun, Nov 22, 2015 at 6:18 PM, Evgeni Burovski wrote: > In [20]: x = np.arange(8) > > In [21]: y = np.arange(8) > > In [22]: z = x[:, None] + y[None, :] > > In [23]: spl = RectBivariateSpline(x, y, z) > > In [24]: spl(x, y, grid=False, dx=1) # Notice the dx argument. > Out[24]: array([ 1., 1., 1., 1., 1., 1., 1., 1.]) > > HTH, > > Evgeni > > > On Sun, Nov 22, 2015 at 9:21 PM, Delaram Ghoreishi > wrote: > > Hi. I need to find partial derivatives of 2d interpolation using > > RectBivariateSpline.__call__ from scipy. This is the code that I have so > > far: > > > > import numpy as np > > from scipy import interpolate > > xm = np.arange(-180.,182.,2) #rank-1 array length 181 > > ym = np.arange(-180.,182.,2) #rank-1 array length 181 > > zm = np.loadtxt('data.dat', dtype = np.double) #rank-2 array length > > (181x181) > > V = interpolate.RectBivariateSpline(xm, ym, zm, s=0) > > > > this code works fine when I need to know the value of the interpolated > > function at a given point, now how should I proceed from here to also > find > > the derivatives at any specific point. > > > > Thanks a lot for the help. > > > > > > -- > > Delaram Ghoreishi > > Ph.D. Student > > Department of Physics > > University of Florida > > P.O. 
Box 118440 > > Gainesville, Florida 32611 > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > https://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -- Delaram Ghoreishi Ph.D. Student Department of Physics University of Florida P.O. Box 118440 Gainesville, Florida 32611 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Mon Nov 23 01:02:00 2015 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 22 Nov 2015 22:02:00 -0800 Subject: [SciPy-User] [IPython-dev] ANN: New edition of the IPython minibook In-Reply-To: References: Message-ID: Awesome, congratulations!! Please don't forget to submit a PR to update the links/images on the site :) On Sun, Nov 22, 2015 at 11:26 AM, Cyrille Rossant wrote: > Hi all, > > The second edition of the book *Learning IPython for Interactive > Computing and Data Visualization* has been released (Packt > Publishing). > > The book now targets Python 3, IPython 4, and the Jupyter Notebook. > There is a new introduction to the Python programming language for > complete beginners, as well as new code examples covering pandas and > NumPy for data analysis and numerical computing. There are also > contents for more advanced users, like parallel computing with IPython > and high-performance computing with Numba and Cython. > > All code examples are available on GitHub as Jupyter notebooks. > > You'll find more information here: > > * IPython Books website: > * Book page on Packt's website: > < > https://www.packtpub.com/big-data-and-business-intelligence/learning-ipython-interactive-computing-and-data-visualization-sec > > > * GitHub repo: > > Cyrille > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > https://mail.scipy.org/mailman/listinfo/ipython-dev > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Mon Nov 23 02:02:40 2015 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 22 Nov 2015 23:02:40 -0800 Subject: [SciPy-User] [IPython-dev] ANN: New edition of the IPython minibook In-Reply-To: References: Message-ID: Actually, just saw there was already a PR open, just merged it, thanks! On Sun, Nov 22, 2015 at 10:02 PM, Fernando Perez wrote: > Awesome, congratulations!! > > Please don't forget to submit a PR to update the links/images on the site > :) > > On Sun, Nov 22, 2015 at 11:26 AM, Cyrille Rossant < > cyrille.rossant at gmail.com> wrote: > >> Hi all, >> >> The second edition of the book *Learning IPython for Interactive >> Computing and Data Visualization* has been released (Packt >> Publishing). >> >> The book now targets Python 3, IPython 4, and the Jupyter Notebook. >> There is a new introduction to the Python programming language for >> complete beginners, as well as new code examples covering pandas and >> NumPy for data analysis and numerical computing. There are also >> contents for more advanced users, like parallel computing with IPython >> and high-performance computing with Numba and Cython. >> >> All code examples are available on GitHub as Jupyter notebooks. 
>> >> You'll find more information here: >> >> * IPython Books website: >> * Book page on Packt's website: >> < >> https://www.packtpub.com/big-data-and-business-intelligence/learning-ipython-interactive-computing-and-data-visualization-sec >> > >> * GitHub repo: >> >> Cyrille >> _______________________________________________ >> IPython-dev mailing list >> IPython-dev at scipy.org >> https://mail.scipy.org/mailman/listinfo/ipython-dev >> > > > > -- > Fernando Perez (@fperez_org; http://fperez.org) > fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) > fernando.perez-at-berkeley: contact me here for any direct mail > -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From alf.rodrigo at gmail.com Wed Nov 25 13:43:55 2015 From: alf.rodrigo at gmail.com (=?UTF-8?Q?Rodrigo_Ara=C3=BAjo?=) Date: Wed, 25 Nov 2015 15:43:55 -0300 Subject: [SciPy-User] Using ctypes with scipy.integrate Message-ID: Hello everyone, I'm using Python and SciPy to develop a reservoir flow simulator for my master thesis and part of the solution includes solving some integrals in order to build a coefficient matrix. This solution is based on using the solution for a source point acting at an observation point and integrating its value considering that the source is a plane instead of a point, as shown below: Point solution: [image: Inline image 3] Plane solution: [image: Inline image 1] where [image: Inline image 2] I'm using scipy.integrate.nquad to integrate based on [1] using the following python function, which receives multiple arguments. def deriv_green_2d(self, x, y, sx, sy, sz, ox, oy, oz, ori, sqrt_u, dn): """Calculates Green's function derivative value on two dimensions. Keyword arguments: x,y -- integration variables sx, sy, sz -- source point coordinates ox, oy, oz -- observation point coordinates ori -- source orientation (wether it's a xy, xz or yz plane) sqrt_u -- square root of Laplace's variable value dn -- normal distance between the points """ # linear distance if ori == 0: r = sqrt((sx-ox)**2 + (sy-oy+x)**2 + (sz-oz+y)**2) elif ori == 1: r = sqrt((sx-ox+x)**2 + (sy-oy)**2 + (sz-oz+y)**2) else: r = sqrt((sx-ox+x)**2 + (sy-oy+y)**2 + (sz-oz)**2) return dn/r**2 * (sqrt_u + 1/r) * exp(-r*sqrt_u) As this integral gets solved multiple times for each timestep, it slows down the simulator performance. In order to improve the speed, I'm trying to use ctypes as shown in [2], but in the example provided, the only parameters for the c function are the integration variables and, as show above, I need multiple different arguments besides the integration variables (sx, sy, sz, ox, oy, oz, ori, sqrt_u and dn). Does anyone know if this possible on SciPy v0.16.1? Best regards and thanks in advance, Rodrigo Ara?jo ? [1]: http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#general-multiple-integration-dblquad-tplquad-nquad [2]: http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#faster-integration-using-ctypes -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rd.png Type: image/png Size: 3605 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 2D_d_integral.png
Type: image/png
Size: 7489 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Point.png
Type: image/png
Size: 5065 bytes
Desc: not available
URL: 

From kevin.gullikson at gmail.com  Wed Nov 25 13:51:01 2015
From: kevin.gullikson at gmail.com (Kevin Gullikson)
Date: Wed, 25 Nov 2015 12:51:01 -0600
Subject: [SciPy-User] Using ctypes with scipy.integrate
In-Reply-To: 
References: 
Message-ID: 

Rodrigo,

Yeah, that is definitely possible. You just have to pass the additional
arguments using the 'args' keyword in your call to integrate.nquad. Here is
a snippet of the C code that I use with integrate.quad for one of my
problems:

double q_integrand_logisticQ_malmquist(int n, double args[n]){
    // unpack arguments
    double q = args[0];
    double gamma = args[1];
    double alpha = args[2];
    double beta = args[3];

Kevin Gullikson
PhD Candidate
University of Texas Astronomy
RLM 15.310E

On Wed, Nov 25, 2015 at 12:43 PM, Rodrigo Ara?jo wrote:

> Hello everyone,
>
> I'm using Python and SciPy to develop a reservoir flow simulator for my
> master thesis and part of the solution includes solving some integrals in
> order to build a coefficient matrix.
>
> This solution is based on using the solution for a source point acting at
> an observation point and integrating its value considering that the source
> is a plane instead of a point, as shown below:
>
> Point solution:
> [image: Inline image 3]
>
> Plane solution:
> [image: Inline image 1]
> where
> [image: Inline image 2]
>
> I'm using scipy.integrate.nquad to integrate based on [1] using the
> following python function, which receives multiple arguments.
>
> def deriv_green_2d(self, x, y, sx, sy, sz, ox, oy, oz, ori, sqrt_u, dn):
>     """Calculates Green's function derivative value on two dimensions.
>
>     Keyword arguments:
>     x,y -- integration variables
>     sx, sy, sz -- source point coordinates
>     ox, oy, oz -- observation point coordinates
>     ori -- source orientation (wether it's a xy, xz or yz plane)
>     sqrt_u -- square root of Laplace's variable value
>     dn -- normal distance between the points
>     """
>     # linear distance
>     if ori == 0:
>         r = sqrt((sx-ox)**2 + (sy-oy+x)**2 + (sz-oz+y)**2)
>     elif ori == 1:
>         r = sqrt((sx-ox+x)**2 + (sy-oy)**2 + (sz-oz+y)**2)
>     else:
>         r = sqrt((sx-ox+x)**2 + (sy-oy+y)**2 + (sz-oz)**2)
>
>     return dn/r**2 * (sqrt_u + 1/r) * exp(-r*sqrt_u)
>
>
> As this integral gets solved multiple times for each timestep, it slows
> down the simulator performance.
>
> In order to improve the speed, I'm trying to use ctypes as shown in [2],
> but in the example provided, the only parameters for the c function are the
> integration variables and, as show above, I need multiple different
> arguments besides the integration variables (sx, sy, sz, ox, oy, oz, ori,
> sqrt_u and dn).
>
> Does anyone know if this possible on SciPy v0.16.1?
>
> Best regards and thanks in advance,
>
> Rodrigo Ara?jo
>
> [1]:
> http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#general-multiple-integration-dblquad-tplquad-nquad
> [2]:
> http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#faster-integration-using-ctypes
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> https://mail.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Point.png Type: image/png Size: 5065 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2D_d_integral.png Type: image/png Size: 7489 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rd.png Type: image/png Size: 3605 bytes Desc: not available URL: From alf.rodrigo at gmail.com Wed Nov 25 14:09:00 2015 From: alf.rodrigo at gmail.com (=?UTF-8?Q?Rodrigo_Ara=C3=BAjo?=) Date: Wed, 25 Nov 2015 16:09:00 -0300 Subject: [SciPy-User] Using ctypes with scipy.integrate In-Reply-To: References: Message-ID: Hi Kevin, Thanks a lot, problem solved. Best regards, Rodrigo Ara?jo On Wed, Nov 25, 2015 at 3:51 PM, Kevin Gullikson wrote: > Rodrigo, > > Yeah, that is definitely possible. You just have to pass the additional > arguments using the 'args' keyword in your call to integrate.nquad. Here is > a snippet of the C code that I use with integrate.quad for one of my > problems.: > > double q_integrand_logisticQ_malmquist(int n, double args[n]){ //unpack > arguments double q = args[0]; double gamma = args[1]; double alpha = args[ > 2]; double beta = args[3]; > > > Kevin Gullikson > PhD Candidate > University of Texas Astronomy > RLM 15.310E > > On Wed, Nov 25, 2015 at 12:43 PM, Rodrigo Ara?jo > wrote: > >> Hello everyone, >> >> I'm using Python and SciPy to develop a reservoir flow simulator for my >> master thesis and part of the solution includes solving some integrals in >> order to build a coefficient matrix. >> >> This solution is based on using the solution for a source point acting at >> an observation point and integrating its value considering that the source >> is a plane instead of a point, as shown below: >> >> Point solution: >> [image: Inline image 3] >> >> Plane solution: >> [image: Inline image 1] >> where >> [image: Inline image 2] >> >> I'm using scipy.integrate.nquad to integrate based on [1] using the >> following python function, which receives multiple arguments. >> >> def deriv_green_2d(self, x, y, sx, sy, sz, ox, oy, oz, ori, sqrt_u, dn): >> """Calculates Green's function derivative value on two dimensions. >> >> Keyword arguments: >> x,y -- integration variables >> sx, sy, sz -- source point coordinates >> ox, oy, oz -- observation point coordinates >> ori -- source orientation (wether it's a xy, xz or yz plane) >> sqrt_u -- square root of Laplace's variable value >> dn -- normal distance between the points >> """ >> # linear distance >> if ori == 0: >> r = sqrt((sx-ox)**2 + (sy-oy+x)**2 + (sz-oz+y)**2) >> elif ori == 1: >> r = sqrt((sx-ox+x)**2 + (sy-oy)**2 + (sz-oz+y)**2) >> else: >> r = sqrt((sx-ox+x)**2 + (sy-oy+y)**2 + (sz-oz)**2) >> >> return dn/r**2 * (sqrt_u + 1/r) * exp(-r*sqrt_u) >> >> >> As this integral gets solved multiple times for each timestep, it slows >> down the simulator performance. >> >> In order to improve the speed, I'm trying to use ctypes as shown in [2], >> but in the example provided, the only parameters for the c function are the >> integration variables and, as show above, I need multiple different >> arguments besides the integration variables (sx, sy, sz, ox, oy, oz, ori, >> sqrt_u and dn). >> >> Does anyone know if this possible on SciPy v0.16.1? >> >> Best regards and thanks in advance, >> >> Rodrigo Ara?jo >> ? 
>> [1]: >> http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#general-multiple-integration-dblquad-tplquad-nquad >> [2]: >> http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#faster-integration-using-ctypes >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rd.png Type: image/png Size: 3605 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Point.png Type: image/png Size: 5065 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2D_d_integral.png Type: image/png Size: 7489 bytes Desc: not available URL:
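
A minimal, untested sketch of the approach discussed in this thread might look
like the code below: a C port of deriv_green_2d compiled into a shared library
and handed to scipy.integrate.nquad as a ctypes function, with the extra
parameters supplied through the args keyword. The library name deriv_green.so,
the gcc command, the integration ranges and all numeric values are
illustrative assumptions only. The ordering of the integration variables and
extra arguments inside the array the C function receives should be checked
against the scipy.integrate documentation for the SciPy version in use, and
newer SciPy releases may require wrapping the ctypes function in
scipy.LowLevelCallable.

# Assumed C integrand (deriv_green.c), a straight port of deriv_green_2d with
# the callback signature scipy.integrate uses for ctypes integrands:
#
#     double deriv_green_2d(int n, double *args);
#
# where args[0] and args[1] are the integration variables x and y, and
# args[2] .. args[10] carry sx, sy, sz, ox, oy, oz, ori, sqrt_u and dn.
# Compiled, for example, with:
#
#     gcc -shared -fPIC -O2 -o deriv_green.so deriv_green.c

import ctypes
from scipy import integrate

lib = ctypes.CDLL('./deriv_green.so')   # library name/path is an assumption
lib.deriv_green_2d.restype = ctypes.c_double
lib.deriv_green_2d.argtypes = (ctypes.c_int, ctypes.POINTER(ctypes.c_double))

# Extra parameters, appended after x and y; all numbers are made up.
extra = (0.0, 0.0, 0.0,    # sx, sy, sz
         5.0, 5.0, 5.0,    # ox, oy, oz
         0.0,              # ori
         2.0,              # sqrt_u
         5.0)              # dn

ranges = [[-0.5, 0.5], [-0.5, 0.5]]     # made-up extent of the source plane

value, abserr = integrate.nquad(lib.deriv_green_2d, ranges, args=extra)
print(value, abserr)

The pure-Python method can already be driven the same way, e.g.
integrate.nquad(self.deriv_green_2d, ranges, args=(sx, sy, sz, ox, oy, oz,
ori, sqrt_u, dn)); moving the integrand to C only changes how each function
evaluation is performed, which is where the time goes since nquad evaluates
the integrand many times.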