BroydenFletcherGoldfarbShanno Class 
Namespace: Accord.Math.Optimization
public class BroydenFletcherGoldfarbShanno : BaseGradientOptimizationMethod, IGradientOptimizationMethod, IOptimizationMethod, IOptimizationMethod<BroydenFletcherGoldfarbShannoStatus>
The BroydenFletcherGoldfarbShanno type exposes the following members.
Constructors

Name  Description  

BroydenFletcherGoldfarbShanno(Int32) 
Creates a new instance of the L-BFGS optimization algorithm.
 
BroydenFletcherGoldfarbShanno(NonlinearObjectiveFunction) 
Creates a new instance of the L-BFGS optimization algorithm.
 
BroydenFletcherGoldfarbShanno(Int32, Func<Double[], Double>, Func<Double[], Double[]>) 
Creates a new instance of the L-BFGS optimization algorithm.

Properties

Name  Description  

Corrections 
The number of corrections to approximate the inverse Hessian matrix.
Default is 6. Values less than 3 are not recommended. Large values
will result in excessive computing time.
 
Delta 
Delta for convergence test.
 
Epsilon 
Epsilon for convergence test.
 
Function 
Gets or sets the function to be optimized.
(Inherited from BaseOptimizationMethod.)  
FunctionTolerance 
The machine precision for floating-point values.
 
Gradient 
Gets or sets a function returning the gradient
vector of the function to be optimized for a
given value of its free parameters.
(Inherited from BaseGradientOptimizationMethod.)  
GradientTolerance 
A parameter to control the accuracy of the line search routine.
 
LineSearch 
The line search algorithm.
 
MaxIterations 
The maximum number of iterations.
 
MaxLineSearch 
The maximum number of trials for the line search.
 
MaxStep 
The maximum step of the line search.
 
MinStep 
The minimum step of the line search routine.
 
NumberOfVariables 
Gets the number of variables (free parameters)
in the optimization problem.
(Inherited from BaseOptimizationMethod.)  
OrthantwiseC 
Coefficient for the L1 norm of variables.
 
OrthantwiseEnd 
End index for computing L1 norm of the variables.
 
OrthantwiseStart 
Start index for computing L1 norm of the variables.
 
ParameterTolerance 
A parameter to control the accuracy of the line search routine. The default
value is 1e-4. This parameter should be greater than zero and smaller
than 0.5.
 
Past 
Distance for delta-based convergence test.
 
Solution 
Gets the current solution found, the values of
the parameters which optimizes the function.
(Inherited from BaseOptimizationMethod.)  
Status  
Token 
Gets or sets a cancellation token that can be used to
stop the learning algorithm while it is running.
(Inherited from BaseGradientOptimizationMethod.)  
Value 
Gets the output of the function at the current Solution.
(Inherited from BaseOptimizationMethod.)  
Wolfe 
A coefficient for the Wolfe condition.

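The Past and Delta properties above drive a delta-based convergence test: the solver can stop once the objective has barely improved over the last few iterations. The following minimal Python sketch shows the idea (illustrative only; the exact formula used by the framework's C# implementation is an assumption):

```python
def delta_converged(f_history, delta, past):
    """Delta-based convergence test in the spirit of the Past and
    Delta properties: stop once the relative improvement over the
    last `past` iterations drops below `delta`."""
    if past <= 0 or len(f_history) <= past:
        return False  # not enough iterations recorded yet
    improvement = f_history[-1 - past] - f_history[-1]
    return abs(improvement) < delta * abs(f_history[-1])
```

Setting Past to zero (the library's way of disabling this test) would correspond to the early-return branch above, leaving only the epsilon-based gradient test active.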
Methods

Name  Description  

Equals  Determines whether the specified object is equal to the current object. (Inherited from Object.)  
Finalize  Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)  
GetHashCode  Serves as the default hash function. (Inherited from Object.)  
GetType  Gets the Type of the current instance. (Inherited from Object.)  
Maximize 
Finds the maximum value of a function. The solution vector
will be made available at the Solution property.
(Inherited from BaseGradientOptimizationMethod.)  
Maximize(Double) 
Finds the maximum value of a function. The solution vector
will be made available at the Solution property.
(Inherited from BaseOptimizationMethod.)  
MemberwiseClone  Creates a shallow copy of the current Object. (Inherited from Object.)  
Minimize 
Finds the minimum value of a function. The solution vector
will be made available at the Solution property.
(Inherited from BaseGradientOptimizationMethod.)  
Minimize(Double) 
Finds the minimum value of a function. The solution vector
will be made available at the Solution property.
(Inherited from BaseOptimizationMethod.)  
Optimize 
Implements the actual optimization algorithm. This
method should try to minimize the objective function.
(Overrides BaseOptimizationMethodOptimize.)  
ToString  Returns a string that represents the current object. (Inherited from Object.) 
Extension Methods

Name  Description  

HasMethod 
Checks whether an object implements a method with the given name.
(Defined by ExtensionMethods.)  
To<T>  Overloaded.
Converts an object into another type, irrespective of whether
the conversion can be done at compile time or not. This can be
used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)  
To<T>  Overloaded.
Converts an object into another type, irrespective of whether
the conversion can be done at compile time or not. This can be
used to convert generic types to numeric types during runtime.
(Defined by Matrix.) 
The L-BFGS algorithm is a member of the broad family of quasi-Newton optimization methods. L-BFGS stands for 'Limited-memory BFGS'. Indeed, L-BFGS uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix (denoted by Hk). Unlike the original BFGS method, which stores a dense approximation, L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its moderate memory requirement, the L-BFGS method is particularly well suited for optimization problems with a large number of variables.
L-BFGS never explicitly forms or stores Hk. Instead, it maintains a history of the past m updates of the position x and gradient g, where generally the history size m can be small, often less than 10. These updates are used to implicitly perform operations requiring the Hk-vector product.
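The implicit Hk-vector product described above is usually computed with the classic two-loop recursion over the stored (s, y) pairs. The following Python sketch illustrates the standard textbook algorithm (for illustration only; it is not the framework's C# source):

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Compute H_k @ grad implicitly from the m most recent position
    differences s_i = x_{i+1} - x_i and gradient differences
    y_i = g_{i+1} - g_i, without ever forming H_k."""
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    q = grad.astype(float).copy()
    alphas = []
    # First loop: walk the history from newest to oldest.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q -= alpha * y
    # Scale by gamma = s'y / y'y, a common initial Hessian guess.
    s, y = s_list[-1], y_list[-1]
    r = (np.dot(s, y) / np.dot(y, y)) * q
    # Second loop: walk the history from oldest to newest.
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r  # the quasi-Newton search direction is -r
```

Each iteration costs only O(m·n) vector operations and O(m·n) storage, which is why the method scales to problems with many variables.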
The framework implementation of this method is based on the original FORTRAN source code by Jorge Nocedal. The original FORTRAN source code of L-BFGS (for unconstrained problems) is available at http://www.netlib.org/opt/lbfgs_um.shar and has been made available in the public domain.
The following example shows the basic usage of the L-BFGS solver to find the minimum of a function, specifying both the function and its gradient.
// Suppose we would like to find the minimum of the function
//
//   f(x,y) = -exp{-(x-1)²} - exp{-(y-2)²/2}
//
// First we need to write down the function either as a named
// method, an anonymous method or as a lambda function:

Func<double[], double> f = (x) =>
    -Math.Exp(-Math.Pow(x[0] - 1, 2)) - Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2));

// Now, we need to write its gradient, which is just the
// vector of first partial derivatives del_f / del_x, as:
//
//   g(x,y) = { del f / del x, del f / del y }
//
Func<double[], double[]> g = (x) => new double[]
{
    // df/dx = {2 e^(-(x-1)^2) (x-1)}
    2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),

    // df/dy = {e^(-1/2 (y-2)^2) (y-2)}
    Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2)) * (x[1] - 2)
};

// Finally, we can create the L-BFGS solver, passing the functions as arguments
var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);

// And then minimize the function:
bool success = lbfgs.Minimize();
double minValue = lbfgs.Value;
double[] solution = lbfgs.Solution;

// The resultant minimum value should be -2, and the solution
// vector should be { 1.0, 2.0 }. The answer can be checked on
// Wolfram Alpha by following the link:
// http://www.wolframalpha.com/input/?i=maximize+%28exp%28%28x1%29%C2%B2%29+%2B+exp%28%28y2%29%C2%B2%2F2%29%29
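For readers who want to sanity-check the example outside the framework, the same problem can be reproduced with SciPy's L-BFGS-B solver (Python shown purely for cross-verification; the names below are SciPy's, not Accord.NET's):

```python
import numpy as np
from scipy.optimize import minimize

# f(x, y) = -exp(-(x-1)^2) - exp(-(y-2)^2 / 2), as in the C# example
f = lambda x: -np.exp(-(x[0] - 1) ** 2) - np.exp(-0.5 * (x[1] - 2) ** 2)

# Analytic gradient, matching the C# example's g
g = lambda x: np.array([
    2 * np.exp(-(x[0] - 1) ** 2) * (x[0] - 1),
    np.exp(-0.5 * (x[1] - 2) ** 2) * (x[1] - 2),
])

result = minimize(f, x0=[0.0, 0.0], jac=g, method="L-BFGS-B")
# result.fun is approximately -2, with result.x near (1, 2)
```

Both solvers should agree on the minimum value of -2 at (1, 2), up to the convergence tolerances in use.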