[SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge

Yury V. Zaytsev yury at shurup.com
Mon Jan 3 08:57:32 EST 2011


Hi Skipper!

On Sat, 2011-01-01 at 11:38 -0500, Skipper Seabold wrote:

> I don't see the NaNs (on 64-bit). 

Actually, I've finally got NaNs on 64-bit as well, when trying tnc
instead of bfgs with my test script, which makes me think the problem is
not so platform-specific after all. I could get rid of the NaNs on both
platforms by replacing np.inf with a sufficiently large finite number,
such as 100.
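For the archives, here is a minimal sketch of that workaround. The objective function is made up for illustration; the point is the wrapper, which clamps any non-finite value to a large finite penalty before the optimizer ever sees it:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Hypothetical objective: blows up to infinity outside its valid range.
def objective(x):
    if x[0] <= 0:
        return np.inf
    return (np.log(x[0]) - 1.0) ** 2

# Wrapper that replaces inf/NaN with a large finite penalty, so the
# line search never has to work with non-finite function values.
def finite_objective(x, penalty=100.0):
    f = objective(x)
    return f if np.isfinite(f) else penalty

x_opt, f_opt, info = fmin_l_bfgs_b(finite_objective,
                                   x0=np.array([5.0]),
                                   approx_grad=True,
                                   bounds=[(1e-8, None)])
```

The minimum of (log(x) - 1)^2 is at x = e, so the optimizer should land there; the penalty branch only matters when the line search strays into the invalid region.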

So I think the fact that I got NaNs on one platform and not the other is
probably due to subtle differences in the function values, which might
depend on the library versions, machine precision and what not...

My conclusion is that the NaNs appear when the function has sharp jumps
with respect to some of the parameters, which make it
non-differentiable.

On the other hand, what else can I do if the values outside of the
parameter range go to infinity and I have explicitly told the optimizer
not to go there?

I have a feeling (which needs more debugging to be confirmed) that bfgs
does not actually respect the boundaries that I have specified. Has
anyone else run into this issue?
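One thing I found while debugging (the quadratic and the bounds below are made up for illustration): with approx_grad the finite-difference step can push evaluations right up to, and past, the stated bounds. A sketch that records every point the optimizer actually evaluates, shrinking the bounds by the finite-difference step as a precaution:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

calls = []  # record every point at which the optimizer evaluates f

def tracked(x):
    calls.append(x.copy())
    return (x[0] - 2.0) ** 2  # unconstrained minimum lies outside [0, 1]

eps = 1e-8
# Shrink the nominal [0, 1] bounds by the forward-difference step eps,
# so the gradient approximation cannot evaluate beyond the true range.
x_opt, f_opt, info = fmin_l_bfgs_b(tracked,
                                   x0=np.array([0.5]),
                                   approx_grad=True, epsilon=eps,
                                   bounds=[(0.0 + eps, 1.0 - eps)])

# Check whether any evaluation fell strictly outside [0, 1].
outside = [x for x in calls if x[0] < 0.0 or x[0] > 1.0]
```

With the shrunken bounds the constrained minimum sits at x = 1 - eps and no evaluation should fall outside the real range; without the shrinking, the forward differences alone would step to the boundary value itself.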

> Curious if this approach might work for you.

I have replaced tnc with bfgs in my production code and tried to tweak
the eta values as you suggested, but I just can't get it to converge at
all.

BFGS tries out different parameters, then goes to the very edge of the
defined boundaries, gets np.inf as a result, tries out NaNs, and then
comes back and converges somewhere in the acceptable parameter range.

TNC does the same, but gets stuck constantly trying out NaNs, no matter
which eta values I use.

That's where I am now...
 
-- 
Sincerely yours,
Yury V. Zaytsev
