[SciPy-user] advice on stochastic(?) optimisation

Dominique Orban dominique.orban at gmail.com
Thu Aug 28 13:45:43 EDT 2008


On Thu, Aug 28, 2008 at 11:23 AM, bryan cole <bryan.cole at teraview.com> wrote:

> I'm looking for a bit of guidance as to what sort of algorithm is most
> appropriate/efficient for finding the local maximum of a function (in 2
> dimensions), where each function evaluation is 1) noisy and 2)
> expensive/slow to evaluate.

A few points that might influence the choice of a method are:

- Are there any constraints?
- Can you compute the first derivatives of your objective function
(i.e., its gradient) and of your constraints?
- How about second derivatives?
- If not, do those functions have a known structure? Do you have
access to computer code that evaluates them, or are they essentially
a "black box"? (The sketch below shows how the answer steers the
choice of solver.)

> I'd welcome any suggestions for where best to start investigating this
> (text books, references, web-sites or existing optimisation libraries).
> I've no background in this field at all.

For noisy optimization you might want to start with Tim Kelley's book
and his implicit filtering method (IFFCO). See
http://www4.ncsu.edu/~ctk/iffco.html
There is Fortran code that could be wrapped with f2py, and
(apparently more up-to-date) Matlab code that could be converted to
Python.

For his book, see http://www.ec-securehost.com/SIAM/FR18.html
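
To give a flavour of the idea, here is a rough sketch of implicit
filtering for a minimization problem (a simplified illustration, not
Kelley's IFFCO code): it uses finite differences over a stencil whose
size shrinks as the iteration proceeds, so coarse stencils average out
the noise before the search refines.

import numpy as np

def implicit_filtering(f, x0, h0=0.5, h_min=1e-3, max_inner=50):
    # Simplified sketch of implicit filtering (not Kelley's IFFCO):
    # steepest descent with a central-difference gradient whose
    # stencil size h shrinks, so coarse stencils smooth the noise.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    h = h0
    while h > h_min:
        for _ in range(max_inner):
            # central-difference gradient on the current stencil
            g = np.zeros_like(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2.0*h)
            if np.linalg.norm(g) <= h:  # "stencil failure": refine h
                break
            # simple backtracking line search along -g
            step = 1.0
            xt = x - step*g
            ft = f(xt)
            while ft >= fx and step > 1e-6:
                step *= 0.5
                xt = x - step*g
                ft = f(xt)
            if ft >= fx:
                break  # no decrease at this scale; refine the stencil
            x, fx = xt, ft
        h *= 0.5  # shrink the stencil
    return x, fx

Again, for your maximization problem you would pass in
lambda x: -f(x).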

There is also a very successful class of methods called mesh-adaptive
direct search (MADS). NOMAD, a C++ implementation aimed at problems
that are essentially black boxes, is available at
http://www.gerad.ca/NOMAD/
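
The simplest member of this family is compass (pattern) search, which
MADS generalizes. A bare-bones sketch, for illustration only:

import numpy as np

def compass_search(f, x0, step=1.0, step_min=1e-4):
    # Bare-bones compass search (the simplest direct-search method,
    # which MADS generalizes): poll f at +/- step along each axis,
    # move to any improvement, halve the step when none is found.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step > step_min:
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx = y, fy
                    improved = True
        if not improved:
            step *= 0.5
    return x, fx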

The advantage of such methods is that, despite the difficulty of the
problems, they come with convergence guarantees. That is not to say
they always work (although they often do), but rather that when they
fail you can usually figure out why.

Dominique


