[SciPy-User] Sometimes fmin_l_bfgs_b tests NaN parameters and then fails to converge

josef.pktd at gmail.com
Tue Jan 4 15:23:01 EST 2011


On Tue, Jan 4, 2011 at 1:12 PM, Yury V. Zaytsev <yury at shurup.com> wrote:
> Hi!
>
> On Tue, 2011-01-04 at 11:22 +0200, Dmitrey wrote:
>
>> ralg (as well as gsubg) is well suited to handling problems with
>> restricted domains.
>
> gsubg is listed as "unconstrained" at [2], while [3] says that its
> constraint-handling code is immature and will most likely fail.
>
> I still have no excuse for not trying ralg out, which I will
> hopefully do at some point, after exploring the performance of the
> nlopt solvers.
>
>> For your problem (nonlinear local minimization), solver efficiency
>> depends on the numbers of variables, box constraints, linear eq/ineq
>> constraints, and nonlinear eq/ineq constraints. Another essential
>> issue is whether the gradients of the active constraints form a
>> linear system that is close to singular (many NLP solvers will then
>> fail). Are you able to take all of these parameters into account? I
>> guess it's easier to simply switch solvers and try which one works
>> best for the NLP involved.
>
> I am certainly not, because this is pretty much my first encounter
> with optimization problems. Last week I didn't even have a clue about
> what classes of problems there are, or which algorithms are commonly
> used to solve them.
>
> That's exactly why I would like to see descriptions along the lines of
> [1], because random trials of the algorithms that are bundled with SciPy
> (bfgs, tnc, ds, etc.) were largely unsuccessful.
>
> At some point, BFGS was converging nicely and fast, until I found out
> that for different starting points it converges to completely
> different solutions. Then I tried DS, which proved to be much more
> robust but extremely slow, and it kept getting stuck whenever the
> simplex had to squeeze through a "needle's eye" in the parameter
> space.

"Welcome to the world of messy optimization problems"

If you have multiple local optima and your optimization problem is not
well behaved, then you might need a global optimizer.
I think the algorithms in nlopt will also get stuck at local optima,
since all of these optimizers use only local information.
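
As a cheap first step, restarting a bounded local solver from several
random points often helps. Below is a minimal multi-start sketch, not
your actual problem: "rosen" is just a stand-in objective, and the box
bounds are made up. The NaN guard is a common workaround (an assumption
on my part, not something you described) for the line-search problem in
the subject line; the bounds also keep the iterates inside the region
where the objective is defined.

import numpy as np
from scipy.optimize import fmin_l_bfgs_b, rosen

def objective(x):
    f = rosen(x)
    # guard: a NaN objective value can derail the line search, so map
    # it to a large finite number (rosen itself never returns NaN;
    # this only illustrates the pattern)
    return 1e10 if np.isnan(f) else f

# hypothetical box bounds; adjust them to your domain
bounds = [(-2.0, 2.0), (-2.0, 2.0)]
lo = np.array([b[0] for b in bounds])
hi = np.array([b[1] for b in bounds])

rng = np.random.RandomState(0)
best_x, best_f = None, np.inf
for _ in range(20):
    # random starting point inside the box
    x0 = rng.uniform(lo, hi)
    x, f, info = fmin_l_bfgs_b(objective, x0, approx_grad=True,
                               bounds=bounds)
    if f < best_f:
        best_x, best_f = x, f

print("best point:", best_x, "value:", best_f)

This is still only a heuristic: with 20 restarts you improve the odds
of landing in the right basin, but there is no guarantee of finding
the global optimum.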

Josef

>
> So I assume that choosing the right method should be more of an
> educated guess than random trial and error. Maybe I should re-post a
> message more specific to my problem to this list.
>
>> Moreover, were I to provide some comparison info in the way you
>> would like, I would have to redo the comparison after every new
>> release of any of the OpenOpt-connected solvers. Thus I don't see
>> any reason to perform it.
>
> Writing reasonably good documentation is not fun. It also takes a lot
> of time that you could otherwise spend on something you might find
> more interesting or useful. On the other hand, it makes users happy,
> especially those who are unfamiliar with the domain.
>
> What you find more important is entirely up to you, and please note that
> I don't blame you for the decisions that you make. However, I find that
> statements such as "I see no reason to perform it" are provocative :-)
>
> If you want to put yourself in your users' shoes, just compare [1] and
> [2]. Do you honestly think that there is no reason for coming up with a
> similar list (short description of the algorithm, paper references and
> personal recommendations for initial orientation) for OpenOpt? I think
> it would certainly benefit the project if only you had time to do it.
>
> [1]: http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms
> [2]: http://openopt.org/NLP
> [3]: http://openopt.org/gsubg
>
> --
> Sincerely yours,
> Yury V. Zaytsev
>


