[SciPy-dev] scipy.optimize

Robert Cimrman cimrman3 at ntc.zcu.cz
Fri Apr 28 05:12:12 EDT 2006


Ed Schofield wrote:

> Hi Robert,
> 
> Yes, I think these are great suggestions.  If you look at maxentropy.py,
> you'll see that I've written various wrappers to deal with exactly these
> issues.  If we had a common interface to the parameters of the optimize
> functions and a callback facility, I could simplify this code.

> I think a generic callback facility would be useful either for logging
> or a variety of other tasks, such as reliability tests in the case of
> stochastic optimization.  We'd need to think about whether to call back
> each iteration or each function/gradient evaluation.  I think each
> iteration would be preferable, but seem to recall that it's not trivial
> to do with, for example, L-BFGS-B.

Yes, fmin_l_bfgs_b wraps a Fortran module, so the only option I can see 
is to rewrite its high-level logic (the main iteration loop?) in Python, 
calling the Fortran subroutines directly. But all the pure-Python 
functions in optimize.py could adopt a common interface without pain.
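
For example, the pure-Python solvers could all accept the same optional 
keyword. A minimal sketch of the idea, assuming a callback argument that 
gets the current iterate (the toy steepest-descent body below is only 
for illustration, not actual optimize.py code):

import numpy as np

def fmin_toy(f, fprime, x0, callback=None, maxiter=200, gtol=1e-5):
    # Toy steepest-descent loop showing the proposed common hook:
    # every solver would accept callback=None and call it with the
    # current iterate.
    x = np.asarray(x0, dtype=float)
    for k in range(maxiter):
        g = fprime(x)
        if np.abs(g).max() < gtol:
            break
        x = x - 0.1 * g                 # fixed step, illustration only
        if callback is not None:
            callback(x)                 # user hook, once per iteration
    return x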

The two main possibilities are:
1) call back only once per iteration (as I do in my fmin_sd)
2) call back via the wrap_function wrapper, so that function/gradient 
calls made e.g. inside the line-search routines are not missed; the 
callback could take one argument saying where it was called from, so 
that in postprocessing you could plot e.g. just the data from the 
main-loop iterations.

Now that I have written them down, I would vote for 2) in some form, 
something along the lines of the sketch below.
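
Just to make 2) concrete, a rough sketch of how a wrap_function-style 
wrapper could pass a tag along to the user callback (the names and the 
'where' argument are only assumptions here, not a patch against 
optimize.py):

def wrap_function_with_callback(function, args, callback=None, where='main_loop'):
    # Counts calls the way optimize.wrap_function does, and additionally
    # reports every evaluation to the user callback together with a tag
    # saying from which part of the algorithm it was made.
    ncalls = [0]
    def wrapper(x):
        ncalls[0] += 1
        fx = function(x, *args)
        if callback is not None:
            callback(x, fx, where)      # e.g. where='line_search'
        return fx
    return ncalls, wrapper

The main loop and the line-search helpers would then wrap f and fprime 
with different tags, and a postprocessing script could keep only the 
records with where == 'main_loop' to plot the outer iterations.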

> We'd also need to consider how to specify stopping criteria uniformly;
> currently the functions use several different definitions of tolerance,
> such as the mean or the norm of the gradient vector.  It would be great
> to unify these.

I will send you what I have so far off-list.
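
In the meantime, roughly the kind of thing I mean is a single gradient 
test shared by all solvers, with the reduction of the gradient to a 
scalar chosen by name (the norm argument and its values below are just 
an assumption for the sketch):

import numpy as np

def gradient_converged(g, gtol=1e-5, norm='max'):
    # One stopping criterion for everybody: the caller decides how the
    # gradient vector is collapsed to a single number.
    g = np.asarray(g, dtype=float)
    if norm == 'max':
        value = np.abs(g).max()
    elif norm == 'mean':
        value = np.abs(g).mean()
    else:                               # Euclidean norm
        value = np.sqrt(np.dot(g, g))
    return value < gtol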

cheers,
r.



