[SciPy-Dev] Global Optimization Benchmarks

Andrea Gavana andrea.gavana at gmail.com
Fri Jan 8 04:20:27 EST 2021


Dear SciPy Developers & Users,

    long time no see :-) . I thought I would start 2021 with a bit of a bang,
to try and forget how bad 2020 has been... So I am happy to present you with a
revamped version of the Global Optimization Benchmarks from my previous
exercise in 2013.

This new set of benchmarks pretty much supersedes - and greatly expands -
the previous analysis that you can find at this location:
http://infinity77.net/global_optimization/ .

The approach I have taken this time is to select as many benchmark test
suites as possible: most of them are characterized by test function
*generators*, from which we can create an almost unlimited number of unique
test problems. The biggest news items are:


   1. This whole exercise is made up of *6,825* test problems divided
   across *16* different test suites: most of these problems are of low
   dimensionality (2 to 6 variables) with a few benchmarks extending to 9+
   variables. With all the sensitivities performed during this exercise on
   those benchmarks, the overall grand total number of function evaluations
   stands at *3,859,786,025* - close to *4 billion*. Not bad.
   2. A couple of "new" optimization algorithms I have ported to Python:

      - MCS: Multilevel Coordinate Search
        <https://www.mat.univie.ac.at/~neum/software/mcs/>: my translation to
        Python of the original Matlab code by A. Neumaier and W. Huyer (which
        also provides GLS and MINQ for free). I have added a few minor
        improvements compared to the original implementation.
      - BiteOpt: BITmask Evolution OPTimization
        <https://github.com/avaneev/biteopt>: I have converted the C++ code
        into Python and added a few minor modifications.

Enough chatting for now. The 13 tested algorithms are described here:

http://infinity77.net/go_2021/

A high-level description & results of the 16 benchmarks:

http://infinity77.net/go_2021/thebenchmarks.html

Each benchmark test suite has its own dedicated page, with more detailed
results and sensitivities.

List of tested algorithms:

   1. *AMPGO*: Adaptive Memory Programming for Global Optimization: this is
   my Python implementation of the algorithm described in:

   http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf

   I have added a few improvements here and there based on my Master's Thesis
   work on the standard Tunnelling Algorithm of Levy, Montalvo and Gomez.
   After AMPGO was integrated into lmfit <https://github.com/lmfit/lmfit-py>,
   I have improved it even further - in my opinion.

   2. *BasinHopping*: Basin hopping is a stochastic algorithm which attempts
   to find the global minimum of a smooth scalar function of one or more
   variables. The algorithm was originally described by David Wales:

   http://www-wales.ch.cam.ac.uk/

   BasinHopping is now part of the standard SciPy distribution (a usage
   sketch covering the four SciPy-based solvers follows this list).

   3. *BiteOpt*: BITmask Evolution OPTimization, based on the algorithm
   presented at:

   https://github.com/avaneev/biteopt

   I have converted the C++ code into Python and added a few minor
   modifications.

   4. *CMA-ES*: Covariance Matrix Adaptation Evolution Strategy, based on
   the following algorithm:

   http://www.lri.fr/~hansen/cmaesintro.html

   http://www.lri.fr/~hansen/cmaes_inmatlab.html#python (Python code for
   the algorithm)

   5. *CRS2*: Controlled Random Search with Local Mutation, as implemented
   in the NLOpt <http://ab-initio.mit.edu/wiki/index.php/NLopt> package:

   http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#Controlled_Random_Search_.28CRS.29_with_local_mutation

   6. *DE*: Differential Evolution, described on the following page:

   http://www1.icsi.berkeley.edu/~storn/code.html

   DE is now part of the standard SciPy distribution, and I have taken the
   implementation as it stands in SciPy:

   https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution

   7. *DIRECT*: the DIviding RECTangles procedure, described in:

   https://www.tol-project.org/export/2776/tolp/OfficialTolArchiveNetwork/NonLinGloOpt/doc/DIRECT_Lipschitzian%20optimization%20without%20the%20lipschitz%20constant.pdf

   http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms#DIRECT_and_DIRECT-L
   (Python code for the algorithm)

   8. *DualAnnealing*: the Dual Annealing algorithm, taken directly from the
   SciPy implementation:

   https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing

   9. *LeapFrog*: the Leap Frog procedure, which was recommended to me,
   taken from:

   https://github.com/flythereddflagg/lpfgopt

   10. *MCS*: Multilevel Coordinate Search: my translation to Python of the
   original Matlab code by A. Neumaier and W. Huyer (which also provides GLS
   <https://www.mat.univie.ac.at/~neum/software/ls/> and MINQ
   <https://www.mat.univie.ac.at/~neum/software/minq/> for free):

   https://www.mat.univie.ac.at/~neum/software/mcs/

   I have added a few minor improvements compared to the original
   implementation. See the MCS <http://infinity77.net/go_2021/mcs.html#mcs>
   section for a quick-and-dirty comparison between the Matlab code and my
   Python conversion.

   11. *PSWARM*: Particle Swarm Optimization algorithm, which has been
   described in many online papers. I have used a compiled version of the C
   source code from:

   http://www.norg.uminho.pt/aivaz/pswarm/

   12. *SCE*: Shuffled Complex Evolution, described in:

   Duan, Q., S. Sorooshian, and V. Gupta, "Effective and efficient global
   optimization for conceptual rainfall-runoff models", Water Resour. Res.,
   28, 1015-1031, 1992.

   The version I used was graciously made available by Matthias Cuntz via a
   personal e-mail.

   13. *SHGO*: Simplicial Homology Global Optimization, taken directly from
   the SciPy implementation:

   https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.shgo.html#scipy.optimize.shgo
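
As mentioned above, four of the tested solvers (BasinHopping, DE,
DualAnnealing and SHGO) ship with SciPy itself. Here is a minimal usage
sketch showing how each one is invoked - on a toy 2-D Rastrigin function,
not on any of the actual benchmark problems, and with default settings
rather than the ones used in the benchmarks:

    import numpy as np
    from scipy.optimize import (basinhopping, differential_evolution,
                                dual_annealing, shgo)

    def rastrigin(x):
        # Classic multimodal test function; global minimum is f(0, 0) = 0.
        x = np.asarray(x)
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 2
    x0 = np.array([4.0, -4.0])  # starting point (BasinHopping only)

    results = {
        "BasinHopping": basinhopping(rastrigin, x0, niter=100, seed=42),
        "DE": differential_evolution(rastrigin, bounds, seed=42),
        "DualAnnealing": dual_annealing(rastrigin, bounds, seed=42),
        "SHGO": shgo(rastrigin, bounds),
    }
    for name, res in results.items():
        print(f"{name:13s}  f* = {res.fun:10.6f}  x* = {res.x}")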


List of benchmark test suites:

   1. SciPy Extended
   <http://infinity77.net/go_2021/scipy_extended.html#scipy-extended>: 235
   multivariate problems (where the number of independent variables ranges
   from 2 to 17), again with multiple local/global minima.

   I have added about 40 new functions to the standard SciPy benchmarks
   <https://github.com/scipy/scipy/tree/master/benchmarks/benchmarks/go_benchmark_functions>
   and fixed a few bugs in the existing benchmark models in the SciPy
   repository.

   2. GKLS <http://infinity77.net/go_2021/gkls.html#gkls>: 1,500 test
   functions, with dimensionality varying from 2 to 6, generated with the
   super famous GKLS Test Functions Generator
   <https://arxiv.org/abs/1103.2695>. I have taken the original C code
   (available at http://netlib.org/toms/) and converted it to Python.

   3. GlobOpt <http://infinity77.net/go_2021/globopt.html#globopt>: 288
   tough problems, with dimensionality varying from 2 to 5, created with
   another test function generator which I arbitrarily named "GlobOpt":
   https://www.researchgate.net/publication/225566516_A_new_class_of_test_functions_for_global_optimization .
   The original code is in C++ and I have bridged it to Python using Cython.

   *Many thanks* go to Professor Marco Locatelli for providing an updated
   copy of the C++ source code.

   4. MMTFG <http://infinity77.net/go_2021/mmtfg.html#mmtfg>: sort of an
   acronym for "Multi-Modal Test Function with multiple Global minima", this
   test suite implements the work of Jani Ronkkonen:
   https://www.researchgate.net/publication/220265526_A_Generator_for_Multimodal_Test_Functions_with_Multiple_Global_Optima .
   It contains 981 test problems with dimensionality varying from 2 to 4.
   The original code is in C and I have bridged it to Python using Cython.

   5. GOTPY <http://infinity77.net/go_2021/gotpy.html#gotpy>: a generator of
   benchmark functions using the Bocharov-Feldbaum "Method-Min", containing
   400 test problems with dimensionality varying from 2 to 5. I have taken
   the Python implementation from https://github.com/redb0/gotpy and
   improved its runtime.

   The original paper is available at
   http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=at&paperid=11985&option_lang=eng .

   6. Huygens <http://infinity77.net/go_2021/huygens.html#huygens>: this
   benchmark suite is very different from the rest, as it uses a "fractal"
   approach to generate test functions. It is based on the work of Cara
   MacNish on Fractal Functions
   <https://www.cs.bham.ac.uk/research/projects/ecb/displayDataset.php?id=150>.
   The original code is in Java, and at the beginning I simply converted it
   to Python: since it was slow as a turtle, I re-implemented it in Fortran
   and wrapped it using f2py <http://www.scipy.org/F2py>, then generated
   600 2-dimensional test problems from it.

   7. LGMVG <http://infinity77.net/go_2021/lgmvg.html#lgmvg>: I am not sure
   about the meaning of the acronym, but the implementation follows the
   "Max-Set of Gaussians Landscape Generator" described at
   http://boyuan.global-optimization.com/LGMVG/index.htm . Source code is
   given in Matlab, but it's fairly easy to convert to Python (a minimal
   sketch of the idea follows this list). This test suite contains 304
   problems with dimensionality varying from 2 to 5.

   8. NgLi <http://infinity77.net/go_2021/ngli.html#ngli>: stemming from the
   work of Chi-Kong Ng and Duan Li, this is a test problem generator for
   unconstrained optimization, but it's fairly easy to assign bound
   constraints to it. The methodology is described at
   https://www.sciencedirect.com/science/article/pii/S0305054814001774 ,
   while the Matlab source code can be found at
   http://www1.se.cuhk.edu.hk/~ckng/generator/ . I have used the Matlab
   script to generate 240 problems with dimensionality varying from 2 to 5
   by writing the generator parameters to text files, then used Python to
   create the objective functions based on those parameters and the
   benchmark methodology.

   9. MPM2 <http://infinity77.net/go_2021/mpm2.html#mpm2>: implementing the
   "Multiple Peaks Model 2"; there is a Python implementation at
   https://github.com/jakobbossek/smoof/blob/master/inst/mpm2.py . This is
   a test problem generator also used in the smoof
   <https://github.com/jakobbossek/smoof> library; I have taken the code
   almost as-is and generated 480 benchmark functions with dimensionality
   varying from 2 to 5.

   10. RandomFields
   <http://infinity77.net/go_2021/randomfields.html#randomfields>: as
   described in
   https://www.researchgate.net/publication/301940420_Global_optimization_test_problems_based_on_random_field_composition ,
   it generates benchmark functions by "smoothing" one or more
   multidimensional discrete random fields and composing them. No source
   code is given, but the implementation is fairly straightforward from the
   article itself (see the sketch after this list).

   11. NIST <http://infinity77.net/go_2021/nist.html#nist>: not exactly the
   realm of Global Optimization solvers, but the NIST StRD
   <https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml> dataset can be
   used to generate a single objective function as a "sum of squares" (the
   same transformation sketched after this list for the NLSE suite). I have
   used the NIST dataset as implemented in lmfit
   <https://github.com/lmfit/lmfit-py/tree/master/NIST_STRD>, thus creating
   27 test problems with dimensionality ranging from 2 to 9.

   12. GlobalLib <http://infinity77.net/go_2021/globallib.html#globallib>:
   Arnold Neumaier maintains
   <https://www.mat.univie.ac.at/~neum/glopt/coconut/Benchmark/Benchmark.html>
   a suite of test problems termed the "COCONUT Benchmark", and Sahinidis
   has converted the GlobalLib and PrincetonLib AMPL/GAMS datasets into
   C/Fortran code (http://archimedes.cheme.cmu.edu/?q=dfocomp ). I have used
   a simple C parser to convert the benchmarks from C to Python.

   The global minima are taken from Sahinidis
   <http://thales.cheme.cmu.edu/dfo/comparison/comp.html> or from Neumaier,
   or refined using the NEOS server <https://neos-server.org/neos/> when
   the accuracy of the reported minima is too low. The suite contains 181
   test functions with dimensionality varying between 2 and 9.

   13. CVMG <http://infinity77.net/go_2021/cvmg.html#cvmg>: another
   "landscape generator"; I had to dig it out using the Wayback Machine at
   http://web.archive.org/web/20100612044104/https://www.cs.uwyo.edu/~wspears/multi.kennedy.html .
   The acronym stands for "Continuous Valued Multimodality Generator".
   Source code is in C++ but it's fairly easy to port to Python. In addition
   to the original implementation (which uses the Sigmoid
   <https://en.wikipedia.org/wiki/Sigmoid_function> as a
   softmax/transformation function) I have added a few others to create more
   varied landscapes. 360 test problems have been generated, with
   dimensionality ranging from 2 to 5.

   14. NLSE <http://infinity77.net/go_2021/nlse.html#nlse>: again, not
   really the realm of Global Optimization solvers, but Nonlinear Systems of
   Equations can be transformed into single objective functions to optimize
   (a sketch of the transformation follows this list). I have drawn from
   many different sources (Publications
   <https://www.sciencedirect.com/science/article/pii/S0898122111004299>,
   ALIAS/COPRIN
   <http://www-sop.inria.fr/coprin/logiciels/ALIAS/Benches/node1.html#SECTION000117000000000000000>
   and many others) to create 44 systems of nonlinear equations with
   dimensionality ranging from 2 to 8.

   15. Schoen <http://infinity77.net/go_2021/schoen.html#schoen>: based on
   the early work of Fabio Schoen and his short note
   <https://link.springer.com/article/10.1007%2FBF01096734> on a simple but
   interesting idea for a test function generator, I have taken the C code
   in the note and converted it into Python, thus creating 285 benchmark
   functions with dimensionality ranging from 2 to 6.

   *Many thanks* go to Professor Fabio Schoen for providing an updated copy
   of the source code and for the email communications.

   16. Robust <http://infinity77.net/go_2021/robust.html#robust>: the last
   benchmark test suite for this exercise; it is actually composed of 5
   different kinds of analytical test function generators, producing
   deceptive, multimodal or flat functions depending on the settings. Matlab
   source code is available at http://www.alimirjalili.com/RO.html ; I
   simply converted it to Python and created 420 benchmark functions with
   dimensionality ranging from 2 to 6.
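
As promised in the LGMVG item above, here is a minimal sketch of the
"Max-Set of Gaussians" idea: the landscape is the pointwise maximum of a set
of weighted Gaussian components. The real generator uses full covariance
matrices; this sketch assumes isotropic Gaussians for brevity, and all the
parameter ranges below are illustrative only:

    import numpy as np

    rng = np.random.default_rng(123)
    n_dim, n_comp = 2, 10
    mu = rng.uniform(-1.0, 1.0, size=(n_comp, n_dim))  # component centres
    w = rng.uniform(0.1, 0.9, size=n_comp)             # component heights
    w[0] = 1.0                   # one component carries the global optimum
    s = rng.uniform(0.05, 0.3, size=n_comp)            # isotropic widths

    def landscape(x):
        # Pointwise maximum over all Gaussian components, negated so the
        # generator's maximisation landscape becomes a minimisation problem.
        d2 = np.sum((mu - np.asarray(x))**2, axis=1)
        return -np.max(w * np.exp(-0.5 * d2 / s**2))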
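
For the RandomFields suite, the recipe is roughly: draw a discrete random
field, smooth it, and interpolate it to obtain a continuous objective. The
article composes several smoothed fields; the single Gaussian-filtered field
below (with linear interpolation standing in for the article's smoothing
choices) already conveys the idea:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.interpolate import RegularGridInterpolator

    rng = np.random.default_rng(7)
    n = 64
    grid = np.linspace(0.0, 1.0, n)
    # Smooth a discrete 2-D random field into a gently varying surface.
    field = gaussian_filter(rng.standard_normal((n, n)), sigma=4.0)
    interp = RegularGridInterpolator((grid, grid), field)

    def objective(x):
        # Continuous objective defined on the unit square [0, 1]^2.
        return float(interp(np.atleast_2d(x))[0])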
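
Finally, the "sum of squares" transformation used for both the NIST and NLSE
suites: given residuals r_i(x), minimize f(x) = sum_i r_i(x)^2, so that any
root of the system is a global minimizer with f(x*) = 0. The 2x2 system
below is a made-up example, not one of the 44 actual benchmark problems:

    import numpy as np

    def residuals(x):
        # Hypothetical nonlinear system: r1(x) = 0, r2(x) = 0.
        return np.array([x[0]**2 + x[1]**2 - 4.0,
                         np.exp(x[0]) + x[1] - 1.0])

    def objective(x):
        # Single scalar objective: the sum of squared residuals.
        r = residuals(x)
        return float(np.dot(r, r))

    # Any bound-constrained global optimizer can now be applied, e.g.:
    # differential_evolution(objective, [(-3.0, 3.0), (-3.0, 3.0)])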


Enjoy, and Happy 2021 :-) .


Andrea.