[Scipy-svn] r6230 - trunk/scipy/optimize

scipy-svn at scipy.org
Wed Feb 10 02:45:00 EST 2010


Author: stefan
Date: 2010-02-10 01:45:00 -0600 (Wed, 10 Feb 2010)
New Revision: 6230

Modified:
   trunk/scipy/optimize/lbfgsb.py
Log:
DOC: Reformat lbfgsb docs.

Modified: trunk/scipy/optimize/lbfgsb.py
===================================================================
--- trunk/scipy/optimize/lbfgsb.py	2010-02-10 07:44:24 UTC (rev 6229)
+++ trunk/scipy/optimize/lbfgsb.py	2010-02-10 07:45:00 UTC (rev 6230)
@@ -40,57 +40,58 @@
     """
     Minimize a function func using the L-BFGS-B algorithm.
 
-    Arguments:
+    Parameters
+    ----------
+    func : callable f(x, *args)
+        Function to minimize.
+    x0 : ndarray
+        Initial guess.
+    fprime : callable fprime(x, *args)
+        The gradient of `func`.  If None, then `func` returns the function
+        value and the gradient (``f, g = func(x, *args)``), unless
+        `approx_grad` is True, in which case `func` returns only ``f``.
+    args : tuple
+        Arguments to pass to `func` and `fprime`.
+    approx_grad : bool
+        Whether to approximate the gradient numerically (in which case
+        `func` returns only the function value).
+    bounds : list
+        ``(min, max)`` pairs for each element in ``x``, defining
+        the bounds on that parameter. Use None for one of ``min`` or
+        ``max`` when there is no bound in that direction.
+    m : int
+        The maximum number of variable metric corrections
+        used to define the limited memory matrix. (The limited memory BFGS
+        method does not store the full Hessian but uses this many terms in an
+        approximation to it.)
+    factr : float
+        The iteration stops when
+        ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``,
+        where ``eps`` is the machine precision, which is automatically
+        generated by the code. Typical values for `factr` are: 1e12 for
+        low accuracy; 1e7 for moderate accuracy; 10.0 for extremely
+        high accuracy.
+    pgtol : float
+        The iteration will stop when
+        ``max{|proj g_i | i = 1, ..., n} <= pgtol``
+        where ``proj g_i`` is the i-th component of the projected gradient.
+    epsilon : float
+        Step size used when `approx_grad` is True, for numerically
+        calculating the gradient.
+    iprint : int
+        Controls the frequency of output. ``iprint < 0`` means no output.
+    maxfun : int
+        Maximum number of function evaluations.
 
-    func    -- function to minimize. Called as func(x, *args)
+    Returns
+    -------
+    x : ndarray
+        Estimated position of the minimum.
+    f : float
+        Value of `func` at the minimum.
+    d : dict
+        Information dictionary.
 
-    x0      -- initial guess to minimum
-
-    fprime  -- gradient of func. If None, then func returns the function
-               value and the gradient ( f, g = func(x, *args) ), unless
-               approx_grad is True then func returns only f.
-               Called as fprime(x, *args)
-
-    args    -- arguments to pass to function
-
-    approx_grad -- if true, approximate the gradient numerically and func returns
-                   only function value.
-
-    bounds  -- a list of (min, max) pairs for each element in x, defining
-               the bounds on that parameter. Use None for one of min or max
-               when there is no bound in that direction
-
-    m       -- the maximum number of variable metric corrections
-               used to define the limited memory matrix. (the limited memory BFGS
-               method does not store the full hessian but uses this many terms in an
-               approximation to it).
-
-    factr   -- The iteration stops when
-               (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr*epsmch
-
-               where epsmch is the machine precision, which is automatically
-               generated by the code. Typical values for factr: 1e12 for
-               low accuracy; 1e7 for moderate accuracy; 10.0 for extremely
-               high accuracy.
-
-    pgtol   -- The iteration will stop when
-                  max{|proj g_i | i = 1, ..., n} <= pgtol
-               where pg_i is the ith component of the projected gradient.
-
-    epsilon -- step size used when approx_grad is true, for numerically
-               calculating the gradient
-
-    iprint  -- controls the frequency of output. <0 means no output.
-
-    maxfun  -- maximum number of function evaluations.
-
-
-    Returns:
-    x, f, d = fmin_lbfgs_b(func, x0, ...)
-
-    x -- position of the minimum
-    f -- value of func at the minimum
-    d -- dictionary of information from routine
         d['warnflag'] is
             0 if converged,
             1 if too many function evaluations,
@@ -99,17 +100,20 @@
         d['funcalls'] is the number of function calls made.
 
 
-   License of L-BFGS-B (Fortran code)
-   ==================================
+   Notes
+   -----
 
-   The version included here (in fortran code) is 2.1 (released in 1997). It was
-   written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal <nocedal at ece.nwu.edu>. It
-   carries the following condition for use:
+   License of L-BFGS-B (Fortran code):
 
-   This software is freely available, but we expect that all publications
-   describing  work using this software , or all commercial products using it,
-   quote at least one of the references given below.
+   The version included here (in Fortran code) is 2.1 (released in
+   1997). It was written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal
+   <nocedal at ece.nwu.edu>. It carries the following condition for use:
 
+   This software is freely available, but we expect that all
+   publications describing work using this software, or all
+   commercial products using it, quote at least one of the references
+   given below.
+
    References
      * R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound
        Constrained Optimization, (1995), SIAM Journal on Scientific and
@@ -118,27 +122,6 @@
        FORTRAN routines for large scale bound constrained optimization (1997),
        ACM Transactions on Mathematical Software, Vol 23, Num. 4, pp. 550 - 560.
 
-    See also:
-        fmin, fmin_powell, fmin_cg,
-               fmin_bfgs, fmin_ncg -- multivariate local optimizers
-        leastsq -- nonlinear least squares minimizer
-
-        fmin_l_bfgs_b, fmin_tnc,
-               fmin_cobyla -- constrained multivariate optimizers
-
-        anneal, brute -- global optimizers
-
-        fminbound, brent, golden, bracket -- local scalar minimizers
-
-        fsolve -- n-dimensional root-finding
-
-        brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding
-
-        fixed_point -- scalar fixed-point finder
-
-        OpenOpt -- a tool which offers a unified syntax to call this and
-         other solvers with possibility of automatic differentiation
-
     """
     n = len(x0)
 

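For readers of this change, a minimal usage sketch of the interface documented above (not part of the commit; the quadratic objective, gradient, bounds, and starting point are illustrative assumptions):

    import numpy as np
    from scipy.optimize import fmin_l_bfgs_b

    # Illustrative objective and analytic gradient: a simple bounded quadratic.
    def f(x):
        return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

    def fprime(x):
        return np.array([2.0*(x[0] - 1.0), 2.0*(x[1] - 2.5)])

    x0 = np.array([2.0, 0.0])
    bounds = [(0.0, None), (0.0, 3.0)]   # (min, max) per parameter; None = no bound

    x, fval, d = fmin_l_bfgs_b(f, x0, fprime=fprime, bounds=bounds)
    print(x, fval, d['warnflag'], d['funcalls'])

    # Without an analytic gradient, pass approx_grad=True and return only f(x):
    # x, fval, d = fmin_l_bfgs_b(f, x0, approx_grad=True, bounds=bounds)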