[SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b

Tveraa, Torkild Torkild.Tveraa at nina.no
Fri Oct 15 19:45:17 EDT 2010


-----Original Message-----
From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Skipper Seabold
Sent: 15 October 2010 17:54
To: SciPy Users List
Subject: Re: [SciPy-User] beginner's question regarding optimize.fmin_l_bfgs_b

On Fri, Oct 15, 2010 at 4:31 AM, Sebastian Haase <seb.haase at gmail.com> wrote:
> On Thu, Oct 14, 2010 at 11:58 PM, Skipper Seabold <jsseabold at gmail.com> wrote:
>> On Thu, Oct 14, 2010 at 5:32 PM, Skipper Seabold <jsseabold at gmail.com> wrote:
>>> On Thu, Oct 14, 2010 at 5:21 PM, Skipper Seabold <jsseabold at gmail.com> wrote:
>>>> On Tue, Oct 12, 2010 at 9:10 AM, Tveraa, Torkild <Torkild.Tveraa at nina.no> wrote:
>>>>> Dear All,
>>>>>
>>>>> I have been able to use optimize.leastsq to minimize a given function (see below), but since my data is sparse I have convergence problems and would ideally like to put bounds on the parameters. If I have understood correctly, this can be done with optimize.fmin_l_bfgs_b, but I am unable to figure out how. Some help & hints would be most appreciated :-)
>>>>>
>>>>>        Cheers,
>>>>>        Torkild
>>>>>
>>>>> -------------------------------------------------------
>>>>> import numpy
>>>>> import pylab
>>>>> from scipy import *
>>>>> from scipy import optimize
>>>>>
>>>>> ## This is y-data:
>>>>> y_data = numpy.array([0.2867, 0.1171, -0.0087, 0.1326, 0.2415, 0.2878, 0.3133, 0.3701, 0.3996, 0.3728, 0.3551, 0.3587, 0.1408, 0.0416, 0.0708, 0.1142, 0, 0, 0])
>>>>>
>>>>> ## This is x-data:
>>>>> t = numpy.array([67, 88, 104, 127, 138, 160, 169, 188, 196, 215, 240, 247, 271, 278, 303, 305, 321, 337, 353])
>>>>>
>>>>> ## This is the equation:
>>>>> fitfunc = lambda p, x:    p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(t-p[3])))) + (1/(1+exp(p[4]*(t-p[5])))) -1)
>>>>>
>>>>> ##
>>>>> errfunc = lambda p, x, y: fitfunc(p,x) -y
>>>>>
>>>>> guess = [0, max(y_data), 0.1, 140, -0.1, 270]
>>>>>
>>>>> bounds = [(-0.2, 0.1),(0.1,0.97), (0.05,0.8), (120,190), (-0.8, -0.05), (200,300) ]
>>>>>
>>>>> ## This seems to work ok:
>>>>> p2,success = optimize.leastsq(errfunc, guess, args=(t, y_data),full_output=0)
>>>>> print 'Estimates from leastsq \n', p2,success
>>>>>
>>>>>
>>>>> ## But this does not:
>>>>> best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, args=(t, y_data), iprint=2)
>>>>
>>>> The fmin minimization routines, I believe, expect a function that
>>>> maps to a scalar.  So you need to tell fmin_l_bfgs_b that you want
>>>> to minimize the sum of squared errors; optimize.leastsq assumes this.
>>>> So just define one more function that sums the squared errors and
>>>> minimize it:
>>>>
>>>> errfuncsumsq = lambda p, x, y: numpy.sum(errfunc(p,x,y)**2)
>>>>
>>>> Now, run it without bounds to make sure we get the same thing
>>>>
>>>> boundsnone = [(None,None)]*6
>>>>
>>>> Notice that you also have to tell fmin_l_bfgs_b to approximate the
>>>> gradient or else it assumes that your objective function also returns
>>>> its gradient
>>>>
>>>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsumsq, guess,
>>>> approx_grad=True, bounds=boundsnone, args=(t, y_data), iprint=2)
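>>>>
>>>> (A minimal sketch, not from the original thread: instead of
>>>> approx_grad=True you can hand fmin_l_bfgs_b a gradient function via
>>>> the fprime keyword.  Here approx_fprime merely stands in for a real
>>>> analytic gradient, so this only illustrates the calling convention:
>>>>
>>>> # hypothetical gradient helper, numerically differentiating the
>>>> # scalar objective defined above
>>>> gradsumsq = lambda p, x, y: optimize.approx_fprime(p, errfuncsumsq, 1e-8, x, y)
>>>>
>>>> best, val, d = optimize.fmin_l_bfgs_b(errfuncsumsq, guess,
>>>> fprime=gradsumsq, bounds=boundsnone, args=(t, y_data), iprint=2)
>>>> )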
>>>>
>>>> p2
>>>> array([  6.79548883e-02,   3.68922503e-01,   7.55565728e-02,
>>>>         1.41378227e+02,   2.91307814e+00,   2.70608242e+02])
>>>>
>>>> best
>>>> array([  6.79585333e-02,  -2.33026316e-01,  -7.55409880e-02,
>>>>         1.41388265e+02,  -1.36069434e+00,   2.70160779e+02])
>>>>
>>>
>>> I just realized that these don't come up with the same thing.  I don't
>>> have an answer for why yet.
>>>
>>> Skipper
>>>
>>
>> Oh,
>>
>> ret = optimize.leastsq(errfunc, guess, args=(t,y_data))
>>
>> ret2 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True,
>> bounds=boundsnone, args=(t, y_data), iprint=2)
>>
>> fitfunc(ret[0],t)
>> array([ 0.0690421 ,  0.0731951 ,  0.08481868,  0.14388978,  0.199337  ,
>>        0.30971974,  0.33570587,  0.3602918 ,  0.36414477,  0.36777158,
>>        0.36874788,  0.36881958,  0.14080121,  0.06794499,  0.06795339,
>>        0.0679536 ,  0.0679545 ,  0.06795477,  0.06795485])
>>
>> fitfunc(ret2[0],t)
>> array([ 0.06904625,  0.07319943,  0.0848205 ,  0.14386744,  0.19929593,
>>        0.30968735,  0.3356897 ,  0.36029973,  0.3641578 ,  0.36779021,
>>        0.36876834,  0.3688402 ,  0.14077023,  0.06795562,  0.06795703,
>>        0.06795724,  0.06795815,  0.06795842,  0.0679585 ])
>>
>> errfuncsumsq(ret[0], t, y_data)
>> 0.079297668259408899
>>
>> errfuncsumsq(ret2[0], t, y_data)
>> 0.079298042836826454
>>
>
> Very nice example!
> However, is it correct that *within* the given bounds the result is
> just a constant?
>>>> ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, approx_grad=True, bounds=bounds, args=(t, y_data), iprint=2)
>>>> ret3
> (array([  1.00000000e-01,   1.00000000e-01,   5.00000000e-02,
>          1.39979309e+02,  -5.00000000e-02,   2.70003604e+02]),
>  0.55408092,
>  {'warnflag': 0,
>   'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL',
>   'grad': array([-4.41837799,  1.03037829,  0.        ,  0.        ,
>                   0.        ,  0.        ]),
>   'funcalls': 2})
>>>>
>>>> errfuncsumsq(ret3[0], t, y_data)
> 0.55408092
>>>>
>>>>
>>>> bounds
> [(-0.2, 0.1), (0.1, 0.97), (0.05, 0.8), (120, 190), (-0.8, -0.05), (200, 300)]
>>>> fitfunc(ret3[0],t)
> [ 0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1  0.1
>  0.1  0.1  0.1  0.1]
>>>>
>
> (Note: fitfunc above used the wrong 't' instead of 'x'; correct is:
>>>> fitfunc = lambda p, x:    p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1)
> )
>

I am far from an expert on (constrained) optimization, but it looks
like, given the bounds, the solver doesn't know where to go: the
function is flat in the region of the (bounded) starting parameters.

from scipy.optimize import approx_fprime

ret3 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, bounds=bounds,
            approx_grad=True, args=(t, y_data), iprint=2)

approx_fprime(ret3[0], errfuncsumsq, 1e-8, *(t, y_data))
array([-4.41837799,  1.03037829,  0.        ,  0.        ,
        0.        ,  0.        ])

approx_fprime(guess, errfuncsumsq, 1e-8, *(t, y_data))
array([ -1.40430234e+01,   8.29307161e+00,   2.48736942e-01,
         2.06907380e-02,  -1.91057303e+00,  -3.60373953e-03])
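
(A follow-up sketch, not from the original thread: the bounded run above
stopped with 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL' after only
2 function calls, so one thing worth trying is tightening the solver's
tolerances and shrinking the finite-difference step.  epsilon, pgtol and
factr are all standard fmin_l_bfgs_b keywords; whether they rescue this
particular fit is untested:

ret4 = optimize.fmin_l_bfgs_b(errfuncsumsq, guess, bounds=bounds,
            approx_grad=True, args=(t, y_data),
            epsilon=1e-10,  # step for the finite-difference gradient
            pgtol=1e-12,    # projected-gradient stopping tolerance
            factr=10.0)     # tighter relative-reduction tolerance
)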

A colleague and R-guru came up with what seems to be a solution: 

fitfunc = lambda p, x: (p[0] + (p[1] -p[0]) * ((1/(1+exp(-p[2]*(x-p[3])))) + (1/(1+exp(p[4]*(x-p[5])))) -1))

## scalar objective: sum of squared residuals, with the x-data now passed in explicitly
errfunc = lambda p, x, y_data: numpy.sum((y_data - fitfunc(p, x))**2)


best, val, d = optimize.fmin_l_bfgs_b(errfunc, guess, bounds=bounds, approx_grad=True, args=(t, y_data), iprint=-1)
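
(A quick check, not from the original thread: print the bounded estimates
and their sum of squared errors next to the unconstrained leastsq result
p2 from above, to see what the bounds cost:

print 'fmin_l_bfgs_b estimates\n', best, '\nSSE =', val
print 'leastsq estimates\n', p2, '\nSSE =', errfunc(p2, t, y_data)
)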



