[SciPy-Dev] CUTEst in Scipy

Ralf Gommers ralf.gommers at gmail.com
Wed Mar 29 05:38:28 EDT 2017


On Mon, Mar 27, 2017 at 2:18 AM, Nikolay Mayorov <nikolay.mayorov at zoho.com>
wrote:

> Hi, Ralf!
>
> Why do you say that? The optimize benchmarks are quite extensive, and came
> from http://infinity77.net/global_optimization/index.html which is doing
> exactly what you say our asv benchmarks are not adequate for. If it's about
> usability/runtime, how about just fixing that? That plus adding some
> problems relevant for Antonio's GSoC project seems to make the most sense
> to me.
> Ralf
>
>
> I thought about it later and realized that I didn't think through my
> answer properly (but then neglected to update it). I see that a large
> collection of global optimization problems was introduced, so the asv
> approach works. I just see optimization benchmarks somewhat differently. I
> believe they should be more user facing and enhance the scipy.optimize
> module itself by providing examples and the ability to quickly run
> different methods with different settings. Right now the benchmarks are a
> completely internal thing and not very exciting for people to try (in my
> opinion). Additionally they are not flexible, as they fix most of the
> method parameters, i.e. it would be better to decouple problem
> formulations from optimization algorithms and their settings.
>

That's a good point. It would indeed be nice to take a representative
subset of problem functions and expose them as `optimize.something`. Then
they can be used from both the asv benchmarks and by users.
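
For the sake of discussion, here is a minimal sketch of what such an exposed
problem could look like (the `TestProblem` container and the `rosenbrock`
object are hypothetical names, not existing scipy.optimize API); it also
illustrates the "decouple the problem from the solver settings" point, since
any solver can be pointed at the same object:

    import numpy as np
    from scipy.optimize import minimize

    class TestProblem:
        """Hypothetical container holding only the problem definition,
        independent of any solver or solver settings."""
        def __init__(self, name, fun, grad, x0, xmin):
            self.name = name    # human-readable identifier
            self.fun = fun      # objective f(x)
            self.grad = grad    # gradient of f
            self.x0 = x0        # standard starting point
            self.xmin = xmin    # known minimizer, useful for success checks

    def _rosen(x):
        return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

    def _rosen_grad(x):
        return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                         200.0 * (x[1] - x[0] ** 2)])

    rosenbrock = TestProblem("rosenbrock", _rosen, _rosen_grad,
                             x0=np.array([-1.2, 1.0]), xmin=np.ones(2))

    # A user, a unit test, or an asv benchmark can then pick any method:
    res = minimize(rosenbrock.fun, rosenbrock.x0, jac=rosenbrock.grad,
                   method="BFGS")
    print(res.x, res.nfev, res.njev)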


> Other than that, if I'm not mistaken, ASV still requires running benchmarks
> multiple times to measure different metrics (like the number of function
> evaluations and Jacobian evaluations), which is probably its main obvious
> shortcoming.
>

This is about `result_type = ["average time", "nfev", "success"]`, right?
That construction is a bit odd, but there probably isn't a fundamental
reason why it can't be fixed.
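
If it does get fixed, one possible direction (just a sketch using asv's
`setup_cache` and `track_*` conventions, not the actual scipy benchmark code)
is to run the solver once and expose each metric as a separately tracked
value:

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    class TrackRosenbrockBFGS:
        """asv-style benchmark: solve once in setup_cache, then report
        several metrics from the cached result via track_* methods."""

        def setup_cache(self):
            # Runs once for the whole class; the result is handed to
            # every benchmark method below as its first argument.
            return minimize(rosen, np.array([-1.2, 1.0]),
                            jac=rosen_der, method="BFGS")

        def track_nfev(self, res):
            return res.nfev

        def track_njev(self, res):
            return res.njev

        def track_success(self, res):
            return int(res.success)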

> Also I'm not sure how "alive" the benchmarks code is and how often it is
> used; for example BenchGlobal doesn't seem to work for me with the current
> ASV master.
>

The problem with the optimize benchmarks is that they're so extensive that
they take forever, and hence are excluded from the default benchmark run
(IIRC).

>
> So my suggestion is to use ASV as a tool to maintain the library's quality
> (similar to unit tests), and maybe automate its runs on Travis, but not to
> try to put everything in it. Specifically for scipy.optimize I suggest
> having a submodule like "Benchmarks" or "Test problems" or "Example
> problems" which will contain problem definitions and some tools to run a
> selected subset for specific solvers and options. These problems can then
> be used in unit tests and maybe even in ASV benchmarks. There is the
> "Rosenbrock function" already, but it is just not enough. My guess is that
> it will be more convenient (compared to ASV) when making enhancements to
> old code or working on new solvers, and it will additionally serve as a
> documentation improvement. I believe it would be good to have such a thing.
>

Agreed. The functions should be selected carefully; it can't be 100+
functions, I think (too much work/maintenance).
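
Something like the following rough sketch of the "run a selected subset for
specific solvers and options" tool Nikolay describes might be a reasonable
starting point (the `PROBLEMS` registry and `run_subset` are made-up names,
with scipy.optimize's built-in rosen as the only entry):

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    # Hypothetical registry: problem name -> (fun, grad, x0)
    PROBLEMS = {
        "rosenbrock": (rosen, rosen_der, np.array([-1.2, 1.0])),
    }

    def run_subset(names, solvers):
        """Run each named problem with each (method, options) pair and
        collect the metrics that matter for comparing solvers."""
        results = []
        for name in names:
            fun, grad, x0 = PROBLEMS[name]
            for method, options in solvers:
                res = minimize(fun, x0, jac=grad, method=method,
                               options=options)
                results.append((name, method, res.nfev, res.nit, res.success))
        return results

    # Example: compare two methods with their own settings on one problem.
    for row in run_subset(["rosenbrock"],
                          [("BFGS", {"gtol": 1e-8}),
                           ("L-BFGS-B", {"ftol": 1e-12})]):
        print(row)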

Ralf




> What do you think?
>
> Nikolay.
>
>
> (when replying to a thread, manually change scipy-dev at scipy.org to
> scipy-dev at python.org)
>
>
>
> On Tue, Mar 21, 2017 at 8:26 AM, Nikolay Mayorov <nikolay.mayorov at zoho.com> wrote:
>
>
> Hi!
>
> I heard about CUTEst, but it looked quite complex, so I decided not to
> deal with it. On your options:
>
> 1) It might be the most practical option, considering you'll have other
> (main) parts of work to do. Also it's not clear whether to include test
> problems in scipy or store them somewhere else. My opinion is that having a
> collection of problems right in scipy.optimize is good. I don't exactly
> remember why we decided not to include test problems for least_squares; I
> guess it was because it required many considerations. In theory there are
> ASV benchmarks in scipy, but they don't feel adequate for benchmarking
> optimization problems.
>
>
> Why do you say that? The optimize benchmarks are quite extensive, and came
> from http://infinity77.net/global_optimization/index.html which is doing
> exactly what you say our asv benchmarks are not adequate for. If it's about
> usability/runtime, how about just fixing that? That plus adding some
> problems relevant for Antonio's GSoC project seems to make the most sense
> to me.
> Ralf
>
>
> So developing benchmarking infrastructure in scipy.optimize might be a
> good side project; my concern is that it will distract you from the main
> path greatly.
>
> To sum up: the simplest way is to pick some problems, write your own
> benchmark suite and store it somewhere else. This way you will be able to
> focus on the algorithms. Other options are possible if you have a really
> good idea of how to do them and are ready to devote time to it.
>
> 2) My understanding is that you don't need any permissions, as you won't
> use CUTEst code (someone could correct me). As I see it: you add an
> interface into scipy, a user installs CUTEst and uses this interface to
> work with CUTEst. It doesn't seem very practical, because installing
> CUTEst looks like a pain. Do you agree, or am I mistaken?
>
> 3) I guess you can ask the author for permission to reuse his code in
> scipy to implement an interface to CUTEst.
>
> Generally about 2 and 3: it might be a good idea to use CUTEst for your
> internal benchmarking during development, and optionally you can add an
> interface to CUTEst into scipy. But I think it will hardly ever be used
> after that.
>
> To sum up the idea: adding a CUTEst dependency for scipy.optimize
> benchmarking seems impractical. It would be useful if you actually try
> working with CUTEst; maybe it will turn out to be not that difficult. In
> that case many of my points are not valid.
>
> I hope you'll be able to make some sense of my rambling.
>
> Nikolay.
>
>
> ---- On Mon, 20 Mar 2017 23:24:14 +0500 Antonio Ribeiro
> <antonior92 at gmail.com> wrote ----
>
> Hello,
>
> I am developing my Google Summer of Code proposal about constrained
> optimisation in Scipy, and it will be very important to have a good set of
> benchmarks.
>
> There is a great library with diverse optimisation benchmarks called
> CUTEst <https://ccpforge.cse.rl.ac.uk/gf/project/cutest/wiki/>. It is
> under the LGPL 2.1 license.
>
> This CUTEst library includes a huge number of problem sets
> and is often used in optimisation papers. Furthermore, many of the
> available optimisation software packages provide some interface to it. I
> am interested in using problems from this set in my project and I want to
> ask how I should proceed:
>
> 1) Select a subset of tests from the CUTEst library and implement them in
> native Python under scipy.
>
> 2) Include some interface to CUTEst in scipy. From what I understand, the
> LGPL license is more restrictive than the BSD-3 license used by Scipy. In
> this case, could we ask for permission?
>
> 3) There is an interface for CUTEst implemented in the library pyOpus
> (under the GPL3 license)
> <http://fides.fe.uni-lj.si/pyopus/download1.html>. Since this is a
> library external to Scipy (with an incompatible license), how should I
> proceed in this case: can I make my benchmarks dependent on it? Can we ask
> for permission to include this interface in Scipy?
>
> Any suggestions on how to proceed and what to include in my proposal are
> very welcome.
>
> Antônio