[SciPy-User] optimize.minimize - help me understand arrays as variables

KURT PETERS peterskurt at msn.com
Thu Jan 15 10:00:01 EST 2015


See below
> Date: Thu, 15 Jan 2015 11:01:56 +1100
> From: Geordie McBain <gdmcbain at freeshell.org>
> Subject: Re: [SciPy-User] optimize.minimize - help me understand
> 	arrays as variables (Andrew Nelson)
> To: SciPy Users List <scipy-user at scipy.org>
> 
> 2015-01-13 10:26 GMT+11:00 KURT PETERS <peterskurt at msn.com>:
> >> Date: Sun, 11 Jan 2015 17:55:32 -0700
> >> From: KURT PETERS <peterskurt at msn.com>
> >> Subject: [SciPy-User] optimize.minimize - help me understand arrays as
> >> variables
> >> To: "scipy-user at scipy.org" <scipy-user at scipy.org>
> >>
> >> I'm trying to use scipy.optimize.minimize.
> >> I've tried multiple "multivariate" methods that don't seem to actually
> >> take multivariate data and derivatives. Can someone tell me how I can make
> >> the multivariate part of the solver actually work?
> >>
> >> Here's an example:
> >> My main function is the following (N is typically 3):
> >>
> >> The input guess is x0 = np.array([1, 2, 3]), and the
> >> optimization function returns:
> >>
> >> def calc_f3d(...):
> >>     f3d = np.ones((np.max([3, N]), 1))
> >>     ... do some assignments to f3d[row, 0] ...
> >>     return np.linalg.norm(f3d)  # the norm of a 3x1 numpy.array
> >>
> >> The jacobian returns an Nx3 matrix:
> >>
> >> def jacob3d(...):
> >>     df = np.ones((np.max([3, N]), 3))
> >>     ... do some assignments to df[row, col] ...
> >>     return df  # numpy.array that's 3x3
> 
> Hello.  I think this might be the problem: jac should have the same
> shape as x, i.e. (N,), not (N, 3); the components of jac are the
> partial derivatives of fun with respect to the corresponding
> components of x.  Because minimize's fun returns a single scalar
> value, its matrix of partials has only one row, so think of jac as
> the gradient of the objective.
> 
> Here's a simple shape=(2,) example, taken from D. M. Greig's
> Optimisation (1980, London: Longman).  The exact minimum is 0 at [1,
> 1].
> 
> import numpy as np
> from scipy.optimize import minimize
> 
> def f(x):                       # Greig (1980, p. 48)
>     return (x[1] - x[0]**2)**2 + (1 - x[0])**2
> 
> def g(x):                       # gradient of f (ibid)
>     return np.array([-4*x[0]*(x[1] - x[0]**2) - 2 + 2*x[0],
>                      2 * (x[1] - x[0]**2)])
> 
> x = np.zeros(2)                 # starting guess
> print('Without Jacobian: ', minimize(f, x))
> print('\nWith:', minimize(f, x, jac=g))
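> 
> As a quick sanity check (a minimal sketch, assuming the definitions
> above): f returns a scalar and g returns an array with the same
> shape as x, which is what minimize expects for a scalar objective
> and its gradient.
> 
> x = np.zeros(2)
> print(np.shape(f(x)))   # () -- scalar objective value
> print(g(x).shape)       # (2,) -- same shape as x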

Geordie,
I don't think that's the case.  Everything I've ever learned about the
Jacobian is that it's the matrix of partial derivatives of each
function with respect to each variable... so two equations in two
unknowns would yield a 2x2 matrix.  Here's a wiki page explaining what
I mean:
http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant
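
For what it's worth, here's the kind of thing I mean: a toy system of
two equations in two unknowns (my own example here, not the code from
my original post), whose Jacobian really is 2x2.  scipy.optimize.root,
which solves vector-valued systems F(x) = 0, does accept a
matrix-valued jac:

import numpy as np
from scipy.optimize import root

def F(x):
    # two equations in two unknowns
    return np.array([x[0] + 2*x[1] - 2,
                     x[0]**2 + 4*x[1]**2 - 4])

def J(x):
    # 2x2 Jacobian: row i holds the partials of F[i]
    # with respect to x[0] and x[1]
    return np.array([[1.0, 2.0],
                     [2*x[0], 8*x[1]]])

sol = root(F, np.array([1.0, 0.5]), jac=J)
print(sol.x)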
 
 If what you're saying is right, then the people who developed the function don't know what a Jacobian is.  I would find that hard to believe.
 
Kurt