[SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)

Michael McNeil Forbes mforbes at physics.ubc.ca
Mon Apr 16 16:13:22 EDT 2007


On 16 Apr 2007, at 12:27 PM, Matthieu Brucher wrote:

> Basically, an approximate Jacobian is used to determine the step
> direction and/or size (depending on the "step" module etc.)
>
> The key to the Broyden approach is that the information about F(x+dx)
> is used to update the approximate Jacobian (think multidimensional
> secant method) in a clever way without any additional function
> evaluations (there is not a unique way to do this and some choices
> work better than others).
>
> Thus, think of Broyden methods as Quasi-Newton methods but with a
> cheap and very approximate Jacobian (hence, one usually uses a robust
> line search method to make sure that one is always descending).
>
>
> I read some documentation; it seems Broyden is a class of different
> steps, am I wrong?  And it tries to approximate the Hessian of the
> function.

I am thinking of root finding G(x) = 0 right now rather than  
optimization (I have not thought about the optimization problem  
yet).  The way the Broyden root finder works is that you start with
an approximate Jacobian J0 (you could start with the identity, for
example).

So, start with an approximate J0 at position x0:

from numpy.linalg import solve

G0 = G(x0)
dx = -solve(J0, G0)   # Quasi-Newton step (solves J0*dx = -G0).
                      # Use your favourite step method here...
x1 = x0 + dx
G1 = G(x1)

Now the Broyden part comes in.  It computes a new approximate
Jacobian J1 at position x1 using J0, dx, and dG = G1 - G0, such that

J1*dx = dG

This is the secant condition.  There are many ways to satisfy it in
multiple dimensions, and the various Broyden methods choose one of
these.  In optimization the most common choice is the BFGS update;
for root finding, Broyden's "good" (first) method is typical, and
other choices have different convergence properties.
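
For concreteness, here is one common rank-one solution of the secant
condition, Broyden's "good" (first) update, as a small NumPy sketch
(my own illustration, not code from scipy):

import numpy as np

def broyden_good_update(J0, dx, dG):
    # Rank-one correction chosen so the new Jacobian satisfies the
    # secant condition J1*dx = dG exactly (up to rounding); it is
    # the minimal (Frobenius-norm) change to J0 that does so.
    return J0 + np.outer(dG - np.dot(J0, dx), dx) / np.dot(dx, dx)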

Now start over with J0 = J1 and x0 = x1, and repeat until the
convergence criterion is met.
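
Putting the steps together, a minimal sketch of the whole iteration
might look like the following (illustrative only: the function name,
tolerance, and identity initialization are my assumptions, not a
scipy API, and there is no line search, so it can wander from a poor
starting point, as noted above):

import numpy as np
from numpy.linalg import solve, norm

def broyden_root(G, x0, J0=None, tol=1e-10, maxiter=100):
    # Solve G(x) = 0 with Broyden's "good" method (sketch).
    x = np.asarray(x0, dtype=float)
    J = np.eye(len(x)) if J0 is None else np.asarray(J0, dtype=float)
    Gx = G(x)
    for _ in range(maxiter):
        dx = -solve(J, Gx)       # Quasi-Newton step
        x_new = x + dx
        G_new = G(x_new)         # the only function evaluation per step
        if norm(G_new) < tol:
            return x_new
        dG = G_new - Gx
        # Broyden "good" update: enforces J*dx = dG at no extra cost
        J = J + np.outer(dG - np.dot(J, dx), dx) / np.dot(dx, dx)
        x, Gx = x_new, G_new
    return x

For example, intersecting the unit circle with the diagonal:

G = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
print(broyden_root(G, [0.8, 0.6]))   # roughly [0.7071, 0.7071]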

Michael.

P.S. More comments to follow.


