[SciPy-Dev] Proposal - optimize callback given a state keyword

Antonio Ribeiro antonior92 at gmail.com
Thu Feb 8 12:22:02 EST 2018


Hi All,

I have a few comments regarding the new callback:

1. I have started a PR changing this: https://github.com/scipy/scipy/pull/7425. As far as I remember, you are helping with it as well, Andrew. I never finished because I gave priority to other projects, but I may go back to work on it after finishing PR 8328.
2. As far as I remember, the methods that could most easily benefit from the modification are: Powell, Nelder-Mead, CG, BFGS, Newton-CG, trust-ncg, trust-exact and dogleg.
3. I think we should allow early termination of the optimization solver by having the callback function return `True` (or `False`); see the sketch after this list.
4. For the solver Nikolay and I are working on right now (https://github.com/scipy/scipy/pull/8328) this option is already implemented. Please take a look at the new `minimize` documentation after the modifications we have introduced:
https://5466-1460385-gh.circle-artifacts.com/0/home/ubuntu/scipy/doc/build/html-scipyorg/generated/scipy.optimize.minimize.html#scipy.optimize.minimize
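
Regarding item 3, a minimal sketch of the behaviour I have in mind (illustrative only; `minimize_sketch` and `_fd_grad` below are made-up names, not the PR 8328 implementation): a truthy return value from the callback stops the iteration.

import numpy as np
from scipy.optimize import OptimizeResult

def _fd_grad(fun, x, eps=1e-8):
    # forward-difference gradient, only to keep the sketch self-contained
    f0 = fun(x)
    return np.array([(fun(x + eps * e) - f0) / eps for e in np.eye(x.size)])

def minimize_sketch(fun, x0, callback=None, maxiter=200, lr=0.1):
    x = np.asarray(x0, dtype=float)
    nit = 0
    for nit in range(1, maxiter + 1):
        x = x - lr * _fd_grad(fun, x)              # placeholder update rule
        if callback is not None:
            state = OptimizeResult(x=x.copy(), fun=fun(x), nit=nit)
            if callback(x, state):                 # truthy return value -> stop early
                break
    return OptimizeResult(x=x, fun=fun(x), nit=nit)

# e.g. stop as soon as the objective is small enough
res = minimize_sketch(lambda x: np.sum(x ** 2), [3.0, -2.0],
                      callback=lambda x, state: state.fun < 1e-6)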

Best,
Antônio

> On Feb 8, 2018, at 15:00, scipy-dev-request at python.org wrote:
> 
> Today's Topics:
> 
>   1. Proposal - optimize callback given a state keyword (Andrew Nelson)
>   2. Re: Proposal - optimize callback given a state keyword
>      (Jaime Fernández del Río)
>   3. Re: Proposal - optimize callback given a state keyword
>      (Joshua Wilson)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Thu, 8 Feb 2018 13:06:19 +1100
> From: Andrew Nelson <andyfaff at gmail.com>
> To: scipy-dev <scipy-dev at python.org>
> Subject: [SciPy-Dev] Proposal - optimize callback given a state
> 	keyword
> Message-ID:
> 	<CAAbtOZeufOk-gn7yB8565riRd30m+ReJ+_jGGPYBGngLVsj-Zg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Hi all,
> I propose that the optimize callbacks be asked to accept a new state
> keyword in the future.
> The state keyword will be an `OptimizeResult` giving the state of the
> optimizer at that point.
> i.e. n_iterations, n_fun, (n_jac?), x, fun, etc.
> 
> To aid the transition I propose that the user callbacks be wrapped with
> something along the lines of:
> 
> import warnings
> from inspect import getfullargspec as _getargspec   # stand-in for scipy's private helper
> 
> class OptimizerCallback(object):
>     def __init__(self, callback):
>         self.callback = callback
>         argspec = _getargspec(callback)
>         # does the user callback already accept the new `state` argument?
>         self.accepts_state = 'state' in argspec.args
>         if not self.accepts_state:
>             warnings.warn("In future versions of scipy the optimizer"
>                           " callback should accept a state kwd",
>                           FutureWarning)
> 
>     def __call__(self, x, state):
>         if self.accepts_state:
>             self.callback(x, state)
>         else:
>             self.callback(x)
> 
> and the optimizers be modified appropriately.
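> 
> As a rough illustration of how the wrapper could behave (nothing below is a
> final API; the state fields simply reuse the usual OptimizeResult names):
> 
> import numpy as np
> from scipy.optimize import OptimizeResult
> 
> def old_style(xk):                  # legacy signature: wrapping emits a FutureWarning
>     print("x =", xk)
> 
> def new_style(xk, state):           # new signature: receives the per-iteration snapshot
>     print("iter", state.nit, "f =", state.fun)
> 
> # what each solver iteration would do with the wrapped callback
> snapshot = OptimizeResult(x=np.array([1.0, 2.0]), fun=5.0, nit=3, nfev=7)
> for cb in (OptimizerCallback(old_style), OptimizerCallback(new_style)):
>     cb(snapshot.x, snapshot)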
> 
> Thoughts, comments?
> 
> 
> 
> _____________________________________
> Dr. Andrew Nelson
> 
> 
> _____________________________________
> 
> ------------------------------
> 
> Message: 2
> Date: Thu, 08 Feb 2018 12:37:35 +0000
> From: Jaime Fernández del Río <jaime.frio at gmail.com>
> To: SciPy Developers List <scipy-dev at python.org>
> Subject: Re: [SciPy-Dev] Proposal - optimize callback given a state
> 	keyword
> Message-ID:
> 	<CAPOWHWmz91LsM93MThKdWc4+XnYZpxyjUGnZ+Huq+eiPHUJpNQ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> On Thu, Feb 8, 2018 at 3:06 AM Andrew Nelson <andyfaff at gmail.com> wrote:
>> [...]
> 
> If we do implement callback on steroids, I'd also suggest:
> 
>   - Letting that class wrap a None (which would make calling it a no-op, as
>   sketched after this list) would allow us to get rid of all those
>   `if callback is not None:` in the optimizer loops.
>   - Another interesting addition for such an OptimizeResult would be to
>   make the typical stopping conditions (gradient norm, relative function
>   change...) available.
>   - This would be especially useful if we allowed the callback to
>   terminate the optimization early, e.g. by returning False instead of None.
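> 
> A no-op wrapper for the first point could be as small as this (purely
> illustrative, not an agreed-upon API):
> 
> class _NoOpCallback(object):
>     """Stands in for callback=None so solver loops can call it unconditionally."""
>     def __call__(self, x, state=None):
>         return False              # never requests early termination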
> 
> While I think all of this would be a step in the right direction, I wonder
> if cramming more functionality into the callback is the way to go. After a
> few months of using optimize A LOT at work, I am pretty much convinced that
> the architecture of the library is, let's call it, suboptimal... I haven't
> fully fleshed it out, but I think it would be much better if, instead of
> black box functions that run through to convergence, with only the callback
> function to shed light on what may be going on, we exposed some form of
> Optimizer object, that would know how to perform a single optimization
> step. There should still be a form of automatically running these to
> convergence, but it would allow advanced users to do things that now are
> impossible, like e.g. running two different algorithms on the same problem
> with the exact same termination conditions.
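> 
> One very rough sketch of the shape such an object could take (the names here
> are invented purely for illustration, not a proposed scipy API):
> 
> import numpy as np
> 
> class StepwiseGD(object):
>     """Toy optimizer object: each .step() performs exactly one iteration."""
>     def __init__(self, fun, grad, x0, lr=0.1):
>         self.fun, self.grad, self.lr = fun, grad, lr
>         self.x = np.asarray(x0, dtype=float)
>         self.nit = 0
> 
>     def step(self):
>         self.x = self.x - self.lr * self.grad(self.x)
>         self.nit += 1
>         return self.x
> 
> # the caller owns the loop, so the exact same termination test can be
> # applied to two different algorithms on the same problem
> opt = StepwiseGD(lambda x: x @ x, lambda x: 2 * x, [3.0, -2.0])
> while np.linalg.norm(opt.grad(opt.x)) > 1e-8 and opt.nit < 1000:
>     opt.step()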
> 
> Jaime
> 
> ------------------------------
> 
> Message: 3
> Date: Thu, 8 Feb 2018 08:12:44 -0600
> From: Joshua Wilson <josh.craig.wilson at gmail.com>
> To: SciPy Developers List <scipy-dev at python.org>
> Subject: Re: [SciPy-Dev] Proposal - optimize callback given a state
> 	keyword
> Message-ID:
> 	<CAKFGQGxFsq7rCJ20Wy63069FWqYq3UPevtxGBiccduf-TKmUow at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
> 
>> we exposed some form of Optimizer object, that would know how to perform a single optimization step
> 
> I think that's a wonderful idea, and one that has a lot of historical
> precedent since it's generally how Fortran optimizers worked because
> there were no function pointers. (Which conveniently means a lot of
> things in SciPy work that way under-the-hood already.)
> 
> Just a random comment from the peanut gallery,
> Josh
> 
> 
> 
> ------------------------------
> 
> End of SciPy-Dev Digest, Vol 172, Issue 7
> *****************************************
