[SciPy-Dev] (Possible) new optimization routine (2) - scipy.optimize

Ralf Gommers ralf.gommers at gmail.com
Tue Nov 26 16:53:16 EST 2013


On Mon, Nov 18, 2013 at 9:59 PM, Andrea Gavana <andrea.gavana at gmail.com> wrote:

> Hi All,
>
>     since I posted the last time about (possible) new optimization
> routines in scipy.optimize, I have been working here and there on making
> the code for AMPGO (Adaptive Memory Programming for Global Optimization) a
> bit more robust and on expanding the benchmark test suite. Since I recently
> saw a few messages about optimization methods flying around on the mailing
> list, I thought I might share some more findings and possibly (finally)
> start integrating AMPGO into scipy.
>
> First things first: I would love to see AMPGO in scipy, but there are
> still a couple of issues to be solved:
>
> 1. Some of the local optimization methods AMPGO can use (like L-BFGS-B,
> TNC and so on) can take advantage of gradient information, and sometimes
> people do actually have access to the gradient of the objective function.
> The problem is, given the definition of the Tunnelling function used by
> AMPGO:
>
> T(x) = (f(x) - aspiration)**2 / prod(dist(s, x) for s in tabu_list)
>
> where "dist" is the Euclidean distance between the current point "x" and
> one of the previous local optima "s" ("tabu_list" is a list containing two
> or more of these local optima).
>
> (or see page 4 at
> http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
> for a clearer formula for the Tunnelling function).
>
> I have absolutely no idea of how to get the analytical expression of the
> gradient of the Tunnelling function, given the gradient of the objective
> function. I'm sure it's very much doable but my calculus skills are way too
> inadequate.
>

That's certainly not trivial; with pen and paper I didn't get there at
first. This is not a show-stopper though, right? Your current benchmarks
work just fine...
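
On second thought, the quotient and product rules may get us there. Writing
r = f(x) - aspiration and P = prod(dist(s, x) for s in tabu_list), we have
T = r**2 / P, so

    grad T = (2*r*P*grad_f(x) - r**2*grad_P) / P**2

with grad_P = P * sum((x - s) / dist(s, x)**2 for s in tabu_list), using
that the gradient of the Euclidean distance ||x - s|| is (x - s)/||x - s||.
A minimal NumPy sketch of this; it assumes plain (unsquared) distances, and
the function name and signature are made up for illustration, not taken
from the AMPGO code:

import numpy as np

def tunnel_grad(x, f, grad_f, aspiration, tabu_list):
    # Gradient of T(x) = (f(x) - aspiration)**2 / prod_s ||x - s||,
    # via the quotient rule for the ratio and the product rule for
    # the denominator P.
    x = np.asarray(x, dtype=float)
    r = f(x) - aspiration
    dists = [np.linalg.norm(x - s) for s in tabu_list]
    P = np.prod(dists)
    # d/dx ||x - s|| = (x - s) / ||x - s||, hence:
    grad_P = P * sum((x - s) / d**2 for s, d in zip(tabu_list, dists))
    return (2.0 * r * P * grad_f(x) - r**2 * grad_P) / P**2

# Quick sanity check against a toy quadratic:
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2.0 * x
x0 = np.array([0.3, -0.7])
tabu = [np.array([1.0, 1.0]), np.array([-1.0, 0.5])]
print(tunnel_grad(x0, f, grad_f, 0.1, tabu))

The only caveat is numerical: near a tabu point the distances vanish and
both T and its gradient blow up, so the local solver would need some
safeguarding there.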



> 2. As the current code for AMPGO supports local solvers from both
> scipy.optimize and OpenOpt, it turns out to be a pain to generalize its
> interface to make AMPGO compatible with the general minimize() API.
>

What's the exact problem? The whole idea of minimize() was to make it
easier to support more solvers by providing a uniform signature.
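
For example, once AMPGO routes its local step through minimize(), switching
solvers is just a string argument. A minimal sketch using only the standard
scipy.optimize API:

import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Classic Rosenbrock test function.
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

x0 = np.array([-1.2, 1.0])
# The same call signature serves every gradient-capable local solver;
# without an explicit jac, these methods fall back to finite differences.
for method in ("L-BFGS-B", "TNC", "SLSQP"):
    res = minimize(rosen, x0, method=method)
    print(method, res.fun, res.nfev)

So the scipy.optimize side should already be uniform; OpenOpt is the odd
one out.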


> Not to mention the general PR process.
>

It's not all that hard. Seeing what you've done with your benchmark site so
far, a PR should be trivial in comparison. If you need some input on what
goes where, just point us to your GitHub repo.


> In any case, I'll try to gather enough willpower to get AMPGO up to scipy
> standards and I'll contribute the benchmarks I created as well.
>

Sounds good!


> So, speaking of the benchmarks and the results, I managed to expand the
> multi-dimensional test suite from 85 to 184 test functions. The test suite
> is now one of the most comprehensive in the Open Source world, and it's
> not in Fortran 77 (luckily enough).
>
> The test suite currently contains:
>
> - 18 one-dimensional test functions with multiple local/global minima;
> - 184 multivariate problems (where the number of independent variables
> ranges from 2 to 17), again with multiple local/global minima.
>
> The main page describing the rules, algorithms and motivation is here:
>
> http://infinity77.net/global_optimization/index.html
>
> A fairly in-depth summary page on AMPGO and its sensitivity to input
> parameters (local solver, tunnelling strategy, etc.):
>
> http://infinity77.net/global_optimization/ampgo.html
>
> Algorithm comparisons:
>
> http://infinity77.net/global_optimization/multidimensional.html
> http://infinity77.net/global_optimization/univariate.html
>
> Test functions and how they rank in a "difficult-to-solve" context:
>
> http://infinity77.net/global_optimization/test_functions.html
>
> The overall conclusion is that AMPGO is superior to all the other
> algorithms I have tried, leaving the second best (pyasa) behind by a full
> 20% in the number of solved problems. It's also the fastest
> (function-evaluation-wise), as it outperforms all the other algorithms'
> best results within 200 function evaluations or fewer (even though the
> other algorithms' limit is 2,000).
>

Would you also consider Firefly interesting for scipy, given its higher
success rate on univariate problems?

Cheers,
Ralf

> However, to be fair, it is an algorithm designed for low-dimensional
> optimization problems (i.e., 1-20 variables).
>
> If anyone has any suggestions on how to implement my point (1) above,
> please feel free to share your thoughts. I have no clue whatsoever.
>
> For any comments or requests to add additional benchmarks, please give
> me a shout.
>
> Enjoy :-) .
>
> Andrea.
>
> "Imagination Is The Only Weapon In The War Against Reality."
> http://www.infinity77.net
>
> # ------------------------------------------------------------- #
> def ask_mailing_list_support(email):
>
>     if mention_platform_and_version() and include_sample_app():
>         send_message(email)
>     else:
>         install_malware()
>         erase_hard_drives()
> # ------------------------------------------------------------- #
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
>