[SciPy-Dev] optimize ncg approx_fhess_p bug?

josef.pktd at gmail.com josef.pktd at gmail.com
Tue Apr 6 10:56:57 EDT 2010


On Tue, Apr 6, 2010 at 9:00 AM,  <josef.pktd at gmail.com> wrote:
> On Tue, Apr 6, 2010 at 1:15 AM,  <josef.pktd at gmail.com> wrote:
>> Is this a bug?
>>
>> Why is there a `*p` in approx_fhess_p?
>>
>> def approx_fhess_p(x0,p,fprime,epsilon,*args):
>>    f2 = fprime(*((x0+epsilon*p,)+args))
>>    f1 = fprime(*((x0,)+args))
>>    return (f2 - f1)/epsilon
>>
>> From a quick search, this function is only used in fmin_ncg, which
>> has a convergence problem if epsilon is small.
>> Convergence to the solution looks OK when I set epsilon to 1e-4 or 1e-6.
>>
>> But with the predefined epsilon, the step in the approximate Hessian
>> calculation looks too small (see the note on the default step below):
>>
>>>>> t.grad(t.solution)*_epsilon
>> array([  0.00000000e+00,  -7.71175966e-16,  -6.94483304e-15])
>>
>> The non-convergence was a test failure reported by Alan with scipy 0.7.2.
>>
>> I'm not sure about this because I never looked at the optimize code
>> before, and I only did some quick checking.
>>
>> Dropping the p in approx_fhess_p, I get a very fast solution
>> (full_output) for the test problem in optimize/tests/test_optimize.py:
>> (array([  1.46693016e-07,  -5.24869440e-01,   4.87527692e-01]), 1.5591321672741123, 5, 18, 0, 0)
>
> Correction
> I think we can ignore this; I made some mistakes with my changes (too
> late at night).
>
> The p is necessary to produce correct results, and I cannot reproduce
> the convergence failure on my computer today.
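
To spell out why the p belongs there: approx_fhess_p is a forward-difference
approximation of the Hessian-vector product H(x0)*p, i.e.
(fprime(x0 + epsilon*p) - fprime(x0)) / epsilon, which is exactly the quantity
the Newton-CG inner loop needs; without the p the perturbation no longer
follows the search direction. A small self-contained sketch (the quadratic
example and the epsilon value are just for illustration, not from scipy's
tests):

    import numpy as np

    def approx_fhess_p(x0, p, fprime, epsilon):
        # Forward difference of the gradient along direction p:
        # (grad f(x0 + eps*p) - grad f(x0)) / eps  ->  H(x0) @ p  as eps -> 0
        f2 = fprime(x0 + epsilon * p)
        f1 = fprime(x0)
        return (f2 - f1) / epsilon

    # Quadratic f(x) = 0.5 * x.A.x has gradient A.x and Hessian A, so the
    # approximation should reproduce A.p (exactly here, since the gradient
    # is linear in x).
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    fprime = lambda x: A.dot(x)
    x0 = np.array([1.0, -1.0])
    p = np.array([0.5, 2.0])
    print(approx_fhess_p(x0, p, fprime, 1e-6))   # ~ [3.5, 4.5]
    print(A.dot(p))                              #   [3.5, 4.5]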

There is still something strange:
I get the convergence failure when I use nosetests directly, but not
when I run the test file (which uses nosetests through numpy.testing).

>nosetests --version
nosetests-script.py version 0.11.1
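
My guess at why the two invocations differ (not verified): running the file as
a script hands off to numpy.testing, while nosetests discovers and runs the
test methods itself, so test ordering and any state shared on the test class
can differ between the two runs. The test modules of that era typically end
with something like this (a sketch, not copied from test_optimize.py):

    if __name__ == "__main__":
        from numpy.testing import run_module_suite
        run_module_suite()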


C:\Josef\eclipsegworkspace\scipy-trunk-new\scipy\optimize\tests>test_optimize.py

..........
----------------------------------------------------------------------
Ran 10 tests in 0.047s

OK

C:\Josef\eclipsegworkspace\scipy-trunk-new\scipy\optimize\tests>nosetests test_optimize.py
......E...
======================================================================
ERROR: line-search Newton conjugate gradient optimization routine
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Josef\eclipsegworkspace\scipy-trunk-new\scipy\optimize\tests\test_opt
imize.py", line 114, in test_ncg
    retall=False)
  File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6
156.win32\programs\python25\lib\site-packages\scipy\optimize\optimize.py", line
1095, in fmin_ncg
    alphak, fc, gc, old_fval = line_search_BFGS(f,xk,pk,gfk,old_fval)
  File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6
156.win32\programs\python25\lib\site-packages\scipy\optimize\optimize.py", line
597, in line_search_BFGS
    phi_a2 = f(*((xk+alpha2*pk,)+args))
  File "c:\josef\_progs\subversion\scipy-trunk_after\trunk\dist\scipy-0.8.0.dev6
156.win32\programs\python25\lib\site-packages\scipy\optimize\optimize.py", line
103, in function_wrapper
    return function(x, *args)
  File "C:\Josef\eclipsegworkspace\scipy-trunk-new\scipy\optimize\tests\test_opt
imize.py", line 40, in func
    raise RuntimeError, "too many iterations in optimization routine"
RuntimeError: too many iterations in optimization routine

----------------------------------------------------------------------
Ran 10 tests in 0.891s

FAILED (errors=1)
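
Note that the RuntimeError comes from the test's own wrapped objective (func
in test_optimize.py, line 40), not from fmin_ncg itself, so the failure means
the test's evaluation budget was exceeded in the nosetests run. Roughly the
kind of guard involved (a sketch reconstructed from the traceback, not the
actual test code; the budget value is made up):

    import numpy as np

    class ObjectiveWithBudget:
        def __init__(self, budget=2000):   # hypothetical budget
            self.funccalls = 0
            self.budget = budget

        def func(self, x):
            # Count evaluations and abort once the budget is exceeded.
            self.funccalls += 1
            if self.funccalls > self.budget:
                raise RuntimeError("too many iterations in optimization routine")
            return np.sum(np.asarray(x) ** 2)   # placeholder objective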



>
> Josef
>
>>
>> Any ideas?
>>
>> Josef
>>
>


