[SciPy-User] Optimisation of a rough function

Matthieu Brucher matthieu.brucher at gmail.com
Sat Apr 2 10:58:35 EDT 2011


This does indeed look like a global optimization problem. A local optimization
method (like Nelder-Mead or conjugate gradient) is only reliable if the function
is convex, or if you start near the global optimum and the function is convex
in that neighbourhood.

First, if you have access to the gradient, you should use it if its
computation is not too slow.
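For instance, something along these lines (purely a sketch -- f and grad_f are
made-up stand-ins for your objective and its analytic gradient):

import numpy as np
from scipy import optimize

# Hypothetical rough objective and its analytic gradient -- replace with yours.
def f(x):
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(20.0 * x))

def grad_f(x):
    return 2.0 * (x - 1.0) + 2.0 * np.cos(20.0 * x)

x0 = np.zeros(10)
# BFGS uses the analytic gradient instead of finite differences, which
# usually cuts the number of function evaluations considerably.
xopt = optimize.fmin_bfgs(f, x0, fprime=grad_f, disp=False)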

Then, if your function is not convex (which seems to be the case), you may
start with simulated annealing or a genetic algorithm to get inside a locally
convex region.
The next step would be to retain several sets of candidate parameters and run
a local optimization from each of them; a bare-bones sketch of this multi-start
idea follows below. I have several gradient-based optimizers available in
scikits.optimization (there are some tutorials on my blog:
http://matt.eifelle.com/category/python/generic-optimizers/) that you may
try, but you can also use SciPy's gradient-based routines (e.g. fmin_bfgs or
fmin_cg) by supplying the gradient.
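Something like this (only a sketch; the search box and the number of restarts
are invented, and f/grad_f are the hypothetical functions from the example
above):

import numpy as np
from scipy import optimize

lower, upper = -2.0, 2.0        # invented bounds for the starting points
n_params, n_starts = 10, 20     # ~10 parameters, 20 random restarts

best_x, best_val = None, np.inf
for _ in range(n_starts):
    x0 = np.random.uniform(lower, upper, n_params)
    # Refine each random start with a gradient-based local optimizer.
    x = optimize.fmin_bfgs(f, x0, fprime=grad_f, disp=False)
    if f(x) < best_val:
        best_x, best_val = x, f(x)

You could equally seed the restarts from the best points visited by an
annealing run instead of sampling uniformly.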

Matthieu

2011/4/2 Paul Anton Letnes <paul.anton.letnes at gmail.com>

>
> On 2. apr. 2011, at 08.04, David Baddeley wrote:
>
> > Hi all,
> >
> > there are enough optimisation gurus here that hopefully someone might
> > have some ideas. I'm trying to optimise a goal function that has a
> > well-defined minimum, but also a certain amount of roughness (see the
> > attached figure for an example). Most of the time (80%) optimize.fmin
> > seems to work, but it sometimes gets stuck in the roughness.
> > optimize.anneal works, but needs many more function evaluations, making
> > it a bit impractical (the goal is to be semi-interactive & the fmin
> > implementation already struggles). The goal function is quite complex
> > and already written in C. I'm really after some form of solver which
> > will skip over all the little bumps, but still benefit from using the
> > gradient information. I should probably also mention that it will
> > usually have ~10 parameters, rather than the 2-parameter case shown,
> > but the same features should be present. I would welcome any ideas.
> >
> > thanks,
> > David
>
> I am no guru, but I have done some optimization work.
>
> 1) Genetic algorithms are good at not getting stuck in local minima.
> 2) The same can be said for the simulated annealing family of algorithms
> (simulated tempering et al.), at least if you get your parameters
> (temperature, etc.) right.
> 3) A simpler approach is what some people call "bad derivatives". Simply
> put, smooth your objective function, or take long steps in the beginning.
> As you get closer to convergence, reduce the smoothing or step length. I do
> not remember any references off-hand, but perhaps a sketch of the algorithm
> is found in Numerical Recipes? (Disclaimer: I have never used this
> particular method myself.)
> 4) Simplest yet: since computation is quite cheap for such a simple
> (correct me if I am wrong here) objective function, simply perform N
> optimization runs with randomized starting points. Pick the one which
> converges to the smallest/biggest value.
>
> Good luck,
> Paul.
>
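Regarding Paul's point 3, here is one way the smoothing idea could be
sketched (all constants are illustrative, and f is the hypothetical objective
from the example above):

import numpy as np
from scipy import optimize

def smoothed(f, sigma, ndim, n_samples=50):
    # Average f over a fixed set of Gaussian perturbations; keeping the
    # samples fixed makes the smoothed objective deterministic for the
    # simplex search.
    noise = np.random.randn(n_samples, ndim) * sigma
    return lambda x: np.mean([f(x + d) for d in noise])

x = np.zeros(10)                      # hypothetical starting point
for sigma in (0.5, 0.1, 0.02):        # progressively less smoothing
    x = optimize.fmin(smoothed(f, sigma, x.size), x, disp=False)
x = optimize.fmin(f, x, disp=False)   # final polish on the raw objective

How well the simplex search copes with the residual Monte-Carlo noise will
depend on how many samples you can afford per evaluation.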



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher