Conjugate gradients minimizer

Carl Banks imbosol-1048743253 at aerojockey.com
Thu Mar 27 01:16:50 EST 2003


Gareth McCaughan wrote:
> Carl Banks wrote:
> 
>> But let me recommend something else.  You have only a thousand
>> variables, which is really not a lot.  Conjugate gradient is best
>> suited for systems of millions of variables, because it doesn't have
>> to store a matrix.  It doesn't scale down well.
>> 
>> So, unless you have reason to believe conjugate gradient is
>> especially suited to this problem (indeed, I have heard of an atomic
>> structure problem where it was), or you plan to scale up to DNA
>> crystals someday, use the BFGS method instead.
> 
> I concur. If it turns out that the overhead from BFGS
> *is* too much for some reason, you might also consider
> something intermediate between conjugate gradient and
> BFGS, such as L-BFGS.
> 
> But it's not true that BFGS requires inverting a matrix
> at each step. The matrix you maintain is an approximation
> to the *inverse* of the Hessian, and no inversion is
> necessary.

Oh, silly me.  I probably said that, but I just meant "you have to
store an NxN matrix."  I'd never heard of L-BFGS, though.  I admit I
got my optimization knowledge from 20-year-old texts.
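For anyone following along, here is a rough NumPy sketch of the point
Gareth is making (not something from the original posts; the crude
backtracking step rule and the tolerances are just placeholders).  The
NxN matrix H that BFGS carries around approximates the inverse Hessian
directly, so each step is a matrix-vector product plus a rank-two
update, and nothing ever gets inverted.  L-BFGS would keep only the
last few (s, y) pairs instead of the full H, which is why it sits
between conjugate gradient and BFGS in storage cost.

import numpy as np

def bfgs_minimize(f, grad, x0, iters=100, tol=1e-8):
    """Plain BFGS sketch: H approximates the *inverse* Hessian and is
    only ever updated, never inverted."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)              # inverse-Hessian approximation: the NxN storage cost
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H.dot(g)          # search direction: one matrix-vector product
        # crude backtracking line search (a real code would enforce Wolfe conditions)
        t, fx, slope = 1.0, f(x), g.dot(p)
        for _ in range(50):
            if f(x + t * p) <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        x_new = x + t * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = y.dot(s)
        if sy > 1e-12:         # skip the update if the curvature condition fails
            rho = 1.0 / sy
            I = np.eye(n)
            # rank-two BFGS update of the inverse-Hessian approximation -- no inversion
            H = (I - rho * np.outer(s, y)).dot(H).dot(I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# quick sanity check on a small ill-conditioned quadratic
A = np.diag([1.0, 10.0, 100.0])
xmin = bfgs_minimize(lambda x: 0.5 * x.dot(A).dot(x), lambda x: A.dot(x), np.ones(3))
print(xmin)    # should land very close to the origin

In practice you would of course reach for a library routine rather
than rolling your own; the sketch is only meant to show where the NxN
matrix lives and that no inversion happens.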


-- 
CARL BANKS
