[SciPy-Dev] Some preliminary black-box COCO minimization benchmark results

Petr Baudis pasky at ucw.cz
Tue Mar 4 22:01:53 EST 2014


  Hi!

On Tue, Mar 04, 2014 at 11:31:19PM +0100, Andrea Gavana wrote:
> >>   Another interesting benchmark might be the COCO benchmark of BBOB
> >> workshops which is often used in academia for global optimization
> >> performance comparisons:
> >>
> >>         http://coco.gforge.inria.fr/doku.php
> >>
> >> Though it focuses on black-box optimization.  I plan to publish a
> >> performance graph for all SciPy's optimizers wrapped in basinhopping
> >> as benchmarked within COCO after the end of March (a month of deadlines
> >> for me), if noone beats me to it.
..snip..
> >
> > I like your approach, hopefully the published results for the
> > benchmarks will include the number of function evaluations as the most
> > prominent parameter instead of the usual, math-standard (and completely
> > useless) CPU time/elapsed time/runtime for Alan algorithm.
> >
> 
> Oh I hate this autocorrect thing... "Alan" should read "an".

  I think "completely useless" is a bit strong, but I agree that the
number of function evaluations is the most important metric.  All COCO
benchmarks use the number of function evaluations as the primary metric.
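
  (As an aside, a uniform way to count evaluations across solvers is to
wrap the objective in a small counter; the sphere function and the
Powell method in this sketch are only illustrative stand-ins, not part
of the COCO setup:)

    # Hedged sketch: count function evaluations by wrapping the
    # objective, so the same counter works with any solver.
    import numpy as np
    from scipy.optimize import minimize

    class CountingObjective:
        def __init__(self, f):
            self.f = f
            self.nfev = 0          # evaluations seen so far
        def __call__(self, x):
            self.nfev += 1
            return self.f(x)

    sphere = CountingObjective(lambda x: float(np.sum(x ** 2)))
    res = minimize(sphere, np.ones(5), method="Powell")
    print("evaluations:", sphere.nfev, "best value:", res.fun)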

  I have put a preliminary PDF with a few graphs at

	http://pasky.or.cz/dev/scipy/templateBBOBmany-LscipyCMA.pdf

benchmarking expected running time vs. dimensionality for each benchmark
function, and expected optimization success vs. running time for the 5D
and 20D benchmark function families.  The benchmark functions are
available at

	http://coco.lri.fr/downloads/download13.09/bbobdocfunctionsdef.pdf

  The benchmarked minimizers are stock scipy minimizers wrapped in
optimize.basinhopping, using completely default settings and treating
all benchmarked functions as black boxes (so no Jacobians etc.).
COBYLA has been excluded because it doesn't support callbacks (yet?),
which interferes with parts of my framework.  CMA from
https://www.lri.fr/~hansen/cma.py is included for an "outside
perspective" comparison.
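
  For concreteness, here is a minimal sketch of the kind of setup meant
above (the real COCO harness is more involved); rosen, the SLSQP choice
and niter=100 below are only illustrative placeholders, not the
settings behind the PDF:

    # Minimal sketch, not the actual benchmark harness: a stock scipy
    # minimizer wrapped in basinhopping, applied to a black-box function.
    import numpy as np
    from scipy.optimize import basinhopping, rosen

    def blackbox(x):
        # Treated as a pure black box: values only, no Jacobian supplied.
        return rosen(x)

    x0 = np.zeros(5)
    res = basinhopping(blackbox, x0,
                       minimizer_kwargs={"method": "SLSQP"},
                       niter=100)
    print(res.x, res.fun)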

  Basically, it seems that for very nice functions Powell is best
(Nelder-Mead is great in low dimensions); in all other cases, CMA beats
all scipy optimizers in the COCO benchmark, though it may
(unsurprisingly) have a slow start.  Among the scipy minimizers, SLSQP
is great especially with a limited budget and scales excellently even
into high dimensionality; you won't do too badly going with the popular
BFGS either.  Each minimizer is best in at least some COCO benchmark
scenario, except TNC, whose performance is uniformly bad on black-box
functions.

  It is preliminary because the computation budget of 10^4 that I used
is too small; the ERT lines after the X marks are not very meaningful.

> > I have run my benchmark against CMA-ES as well, you can see the comparison
> > results here:
> >
> > http://infinity77.net/global_optimization/multidimensional.html
> >
> > The current Python wrapper for CMA-ES does not work for Univariate
> > problems.

  Indeed, I really look forward to trying out AMPGO; I will do that as
soon as time allows!  Regarding the benchmark, what local minimizer
does basinhopping wrap here?

				Petr "Pasky" Baudis


