[SciPy-Dev] minimizer benchmark

Geoff Oxberry goxberry at gmail.com
Mon Nov 18 11:12:11 EST 2013


Would you be interested in more benchmarking problems? There should be a
couple of lists of problems within the thread at
http://scicomp.stackexchange.com/questions/46/where-can-one-obtain-good-data-sets-test-problems-for-testing-algorithms-routine.
(Disclaimer: I've moderated Computational Science Stack Exchange for a
couple of years with the goal of answering questions like, "Where do I
find benchmark problems for numerical optimization?") Benchmarking is a
common concern within optimization, so you should be able to develop a
fairly comprehensive suite of tests if you so desire.

Geoff


On Mon, Nov 18, 2013 at 8:05 AM, alex <argriffi at ncsu.edu> wrote:

> On Mon, Nov 18, 2013 at 10:20 AM, Jacob Stevenson
> <jstevenson131 at gmail.com> wrote:
> > Hi Everyone, it seemed to me that the folder scipy/optimize/benchmarks/
> > was rather lonely, with only one file in it, so I wrote a script which
> > benchmarks the scipy minimizers.  The script simply runs all the
> > optimizers on the Rosenbrock function and prints various statistics.
> > Here's the output sorted by minimization time
> >
> > Optimizer benchmark on the Rosenbrock function sorted by time
> >         L-BFGS-B pass nfev    6 njev    0 nhev    0 time 0.000691891
> >              TNC pass nfev   13 njev    0 nhev    0 time 0.00105786
> >        Newton-CG pass nfev   17 njev   27 nhev   11 time 0.00400996
> >            SLSQP pass nfev   41 njev   27 nhev    0 time 0.00410509
> >        trust-ncg pass nfev   18 njev   16 nhev   15 time 0.00415802
> >           dogleg pass nfev   16 njev   14 nhev   13 time 0.00426602
> >               CG pass nfev   63 njev   63 nhev    0 time 0.0065279
> >             BFGS pass nfev   44 njev   44 nhev    0 time 0.0070231
> >           Powell pass nfev  524 njev    0 nhev    0 time 0.0262001
> >           COBYLA fail nfev 1000 njev    0 nhev    0 time 0.026603
> >
> > The results are interesting, with L-BFGS-B outperforming all the others
> > by a significant margin in both total time and total number of function
> > calls.
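> >
> > For reference, the core of the script is roughly a loop like the
> > following (a simplified sketch of what it does, not the actual file;
> > see the link below for the real thing):
> >
> >     import time
> >     import numpy as np
> >     from scipy.optimize import minimize, rosen, rosen_der, rosen_hess
> >
> >     methods = ["L-BFGS-B", "TNC", "Newton-CG", "SLSQP", "trust-ncg",
> >                "dogleg", "CG", "BFGS", "Powell", "COBYLA"]
> >     x0 = np.array([0.8, 1.2, 0.7])
> >
> >     results = []
> >     for method in methods:
> >         t0 = time.time()
> >         # jac/hess are ignored (with a warning) by the methods that
> >         # do not use them, so one call signature covers all solvers
> >         res = minimize(rosen, x0, method=method,
> >                        jac=rosen_der, hess=rosen_hess)
> >         results.append((time.time() - t0, method, res))
> >
> >     # print the summary sorted by wall-clock time
> >     for elapsed, method, res in sorted(results, key=lambda r: r[0]):
> >         print("%16s %s nfev %4d njev %4d nhev %4d time %g" % (
> >             method, "pass" if res.success else "fail", res.nfev,
> >             getattr(res, "njev", 0), getattr(res, "nhev", 0), elapsed))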
> >
> > I have not submitted a pull request because I have no idea how to fit
> > what I've done into an existing benchmarking framework (if there is
> > one).  I will submit the pull request if there is interest.  In my
> > opinion it would be really useful to have a broad set of benchmarks to
> > see how all the minimizers perform on different types of minimization
> > problems.
> >
> > Here is the script in my scipy fork
> >
> >
> https://github.com/js850/scipy/blob/benchmarks/scipy/optimize/benchmarks/bench_optimizers.py
> >
> > Best wishes,
> > Jake
>
> I'll make a couple of comments before I forget everything I learned
> about Rosenbrock and about the scipy optimization framework when I
> implemented the dogleg and trust-ncg functions from the Nocedal and
> Wright book...
>
> First, thanks for adding these benchmarks!  This seems like a natural
> comparison to have available in scipy, and I suspect that this has not
> been done earlier because Denis has only recently created a general
> framework that allows interchangeable minimization functions.  I hope
> that something like this will go into scipy.
>
> I wrote similar one-off benchmarking code for algopy
>
> https://github.com/b45ch1/algopy/tree/master/documentation/sphinx/examples/minimization
> before I started contributing to scipy.  It includes the
> Rosenbrock function as well as a couple of other good benchmarking
> functions, with easy and hard starting guesses for each; the licenses
> are compatible, so maybe this would be useful for future scipy
> minimize() benchmarks.
>
> Benchmarks are an endless source of nitpicking, so I'll mention two
> nitpicks for your code :)  First, the different minimize() methods
> might differ in how close they get to the optimum before they report
> that they have successfully converged.  The n-dimensional Rosenbrock
> function has its minimum at (1, 1, ..., 1), so it is easy to measure
> how close each solver actually gets.  Second, and possibly more
> importantly, the relative performance may change depending on whether
> the starting guess is easy or hard.  Your starting point
> x0=[0.8, 1.2, 0.7] is relatively near (1, 1, 1), so it is "easy".
> Starting points that require walking around the Rosenbrock valley
> http://en.wikipedia.org/wiki/File:Rosenbrock_function.svg are
> trickier, and the less clever minimization methods may outperform the
> more clever ones on easy starting guesses.
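>
> As a concrete version of both nitpicks, something along these lines
> would report the distance to the known optimum from an easy and a hard
> start (a rough sketch; [-1.2, 1, ...] is the conventional "hard"
> Rosenbrock start from the literature, extended here to three
> dimensions):
>
>     import numpy as np
>     from scipy.optimize import minimize, rosen, rosen_der
>
>     # easy: near the optimum; hard: must walk around the curved valley
>     starts = [("easy", np.array([0.8, 1.2, 0.7])),
>               ("hard", np.array([-1.2, 1.0, -1.2]))]
>
>     for label, x0 in starts:
>         res = minimize(rosen, x0, method="BFGS", jac=rosen_der)
>         err = np.linalg.norm(res.x - 1.0)  # optimum is (1, 1, ..., 1)
>         print("%s: nfev %4d  |x - x*| = %.2e" % (label, res.nfev, err))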
>
> Best,
> Alex
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>



-- 
Geoffrey Oxberry, Ph.D., E.I.T.
goxberry at gmail.com