BroydenFletcherGoldfarbShanno Class
Namespace: Accord.Math.Optimization
```csharp
public class BroydenFletcherGoldfarbShanno : BaseGradientOptimizationMethod,
    IGradientOptimizationMethod, IOptimizationMethod,
    IOptimizationMethod<double[], double>,
    IGradientOptimizationMethod<double[], double>,
    IFunctionOptimizationMethod<double[], double>,
    IOptimizationMethod<BroydenFletcherGoldfarbShannoStatus>,
    IOptimizationMethod<double[], double, BroydenFletcherGoldfarbShannoStatus>
```
The BroydenFletcherGoldfarbShanno type exposes the following members.
Name | Description
---|---
BroydenFletcherGoldfarbShanno() | Creates a new instance of the L-BFGS optimization algorithm.
BroydenFletcherGoldfarbShanno(Int32) | Creates a new instance of the L-BFGS optimization algorithm for a problem with the given number of variables.
BroydenFletcherGoldfarbShanno(NonlinearObjectiveFunction) | Creates a new instance of the L-BFGS optimization algorithm for the given objective function (see the sketch after this table).
BroydenFletcherGoldfarbShanno(Int32, Func&lt;Double[], Double&gt;, Func&lt;Double[], Double[]&gt;) | Creates a new instance of the L-BFGS optimization algorithm with the given number of variables, objective function, and gradient.
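For the overload that accepts a NonlinearObjectiveFunction, the objective and its gradient can be bundled into a single object first. The sketch below assumes NonlinearObjectiveFunction exposes a (numberOfVariables, function, gradient) constructor; consult that class's own documentation page for the exact signature.

```csharp
using System;
using Accord.Math.Optimization;

class ConstructorExample
{
    static void Main()
    {
        // Wrap a simple convex objective and its gradient in a
        // NonlinearObjectiveFunction. The (numberOfVariables, function,
        // gradient) parameter order is assumed here.
        var objective = new NonlinearObjectiveFunction(2,
            x => Math.Pow(x[0] - 3, 2) + Math.Pow(x[1] + 1, 2),
            x => new[] { 2 * (x[0] - 3), 2 * (x[1] + 1) });

        // Pass the bundled objective to the third constructor overload above:
        var lbfgs = new BroydenFletcherGoldfarbShanno(objective);

        bool converged = lbfgs.Minimize(); // should find the minimum at (3, -1)
        Console.WriteLine("Minimum at ({0}, {1})",
            lbfgs.Solution[0], lbfgs.Solution[1]);
    }
}
```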
Name | Description
---|---
Corrections | The number of corrections used to approximate the inverse Hessian matrix. The default is 6. Values less than 3 are not recommended; large values will result in excessive computing time. (See the tuning sketch after this table.)
Delta | Delta for the convergence test.
Epsilon | Epsilon for the convergence test.
Function | Gets or sets the function to be optimized. (Inherited from BaseOptimizationMethod.)
FunctionTolerance | The machine precision for floating-point values.
Gradient | Gets or sets a function returning the gradient vector of the function to be optimized for a given value of its free parameters. (Inherited from BaseGradientOptimizationMethod.)
GradientTolerance | A parameter to control the accuracy of the line search routine.
LineSearch | The line search algorithm.
MaxIterations | The maximum number of iterations.
MaxLineSearch | The maximum number of trials for the line search.
MaxStep | The maximum step of the line search.
MinStep | The minimum step of the line search routine.
NumberOfVariables | Gets the number of variables (free parameters) in the optimization problem. (Inherited from BaseOptimizationMethod.)
OrthantwiseC | Coefficient for the L1 norm of the variables.
OrthantwiseEnd | End index for computing the L1 norm of the variables.
OrthantwiseStart | Start index for computing the L1 norm of the variables.
ParameterTolerance | A parameter to control the accuracy of the line search routine. The default value is 1e-4. This parameter should be greater than zero and smaller than 0.5.
Past | Distance for the delta-based convergence test.
Solution | Gets the current solution found, the values of the parameters which optimize the function. (Inherited from BaseOptimizationMethod.)
Status | Gets the status of the optimization (a BroydenFletcherGoldfarbShannoStatus value).
Token | Gets or sets a cancellation token that can be used to stop the learning algorithm while it is running. (Inherited from BaseOptimizationMethod.)
Value | Gets the output of the function at the current Solution. (Inherited from BaseOptimizationMethod.)
Wolfe | A coefficient for the Wolfe condition.
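The convergence-related properties above can be tuned before calling Minimize or Maximize. A minimal sketch follows; the property names come from the table above, but the specific values are purely illustrative, not recommended settings.

```csharp
using System;
using Accord.Math.Optimization;

class TuningExample
{
    static void Main()
    {
        // Simple convex objective: f(x) = (x0 - 1)^2 + (x1 + 2)^2
        Func<double[], double> f = x => Math.Pow(x[0] - 1, 2) + Math.Pow(x[1] + 2, 2);
        Func<double[], double[]> g = x => new[] { 2 * (x[0] - 1), 2 * (x[1] + 2) };

        var lbfgs = new BroydenFletcherGoldfarbShanno(2, f, g);

        lbfgs.Corrections = 10;     // keep 10 correction pairs instead of the default 6
        lbfgs.MaxIterations = 100;  // cap the number of iterations
        lbfgs.Epsilon = 1e-8;       // tighter convergence test
        lbfgs.Past = 5;             // delta-based test over a 5-iteration window
        lbfgs.Delta = 1e-10;        // relative-change threshold for that test

        bool success = lbfgs.Minimize();
        Console.WriteLine("min f = {0} at ({1}, {2})",
            lbfgs.Value, lbfgs.Solution[0], lbfgs.Solution[1]);
    }
}
```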
Name | Description
---|---
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
Maximize() | Finds the maximum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseGradientOptimizationMethod.)
Maximize(Double[]) | Finds the maximum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseOptimizationMethod.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
Minimize() | Finds the minimum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseGradientOptimizationMethod.)
Minimize(Double[]) | Finds the minimum value of a function. The solution vector will be made available at the Solution property. (See the sketch after this table.) (Inherited from BaseOptimizationMethod.)
OnNumberOfVariablesChanged | Called when the NumberOfVariables property has changed. (Overrides BaseOptimizationMethod.OnNumberOfVariablesChanged(Int32).)
Optimize | Implements the actual optimization algorithm. This method should try to minimize the objective function. (Overrides BaseOptimizationMethod.Optimize().)
ToString | Returns a string that represents the current object. (Inherited from Object.)
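The sketch below exercises two of these members together: the Minimize(Double[]) overload, which is assumed here to take the initial point of the search, and the inherited Token property for cooperative cancellation. Treat it as an illustration under those assumptions rather than authoritative API guidance.

```csharp
using System;
using System.Threading;
using Accord.Math.Optimization;

class MinimizeWithStartAndCancel
{
    static void Main()
    {
        // A function with two minima, so the starting point matters:
        // f(x) = (x^2 - 1)^2 has minima at x = -1 and x = +1.
        Func<double[], double> f = x => Math.Pow(x[0] * x[0] - 1, 2);
        Func<double[], double[]> g = x => new[] { 4 * x[0] * (x[0] * x[0] - 1) };

        var lbfgs = new BroydenFletcherGoldfarbShanno(1, f, g);

        // Optional: allow the run to be aborted from another thread
        // through the inherited Token property.
        var cts = new CancellationTokenSource();
        lbfgs.Token = cts.Token;

        // Starting near x = 2 should lead the solver to the minimum at x = +1
        // (assuming the Double[] argument is the initial point of the search).
        bool success = lbfgs.Minimize(new double[] { 2.0 });

        Console.WriteLine("success = {0}, x = {1}, f(x) = {2}",
            success, lbfgs.Solution[0], lbfgs.Value);
    }
}
```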
Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To&lt;T&gt;() | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
The L-BFGS algorithm is a member of the broad family of quasi-Newton optimization methods. L-BFGS stands for 'Limited-memory BFGS'. As the name suggests, L-BFGS uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix (denoted by Hk). Unlike the original BFGS method, which stores a dense approximation, L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its moderate memory requirements, the L-BFGS method is particularly well suited for optimization problems with a large number of variables.
L-BFGS never explicitly forms or stores Hk. Instead, it maintains a history of the past m updates of the position x and gradient g, where the history size m can generally be small, often less than 10. These updates are used to implicitly perform the operations requiring the Hk-vector product, as illustrated by the sketch below.
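To make the implicit product concrete, the textbook "two-loop recursion" computes Hk times the gradient from the m stored pairs (s_i, y_i) without ever forming Hk. The following is a generic illustration of that recursion, not a copy of the framework's internal code:

```csharp
using System;
using System.Collections.Generic;

static class TwoLoopRecursion
{
    // Computes r = Hk * grad implicitly from the stored pairs
    // s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (oldest first).
    public static double[] Apply(IList<double[]> s, IList<double[]> y, double[] grad)
    {
        int m = s.Count, n = grad.Length;
        if (m == 0)
            return (double[])grad.Clone(); // no curvature pairs yet: steepest descent

        var q = (double[])grad.Clone();
        var alpha = new double[m];

        // First loop: newest pair to oldest.
        for (int i = m - 1; i >= 0; i--)
        {
            double rho = 1.0 / Dot(y[i], s[i]);
            alpha[i] = rho * Dot(s[i], q);
            for (int j = 0; j < n; j++)
                q[j] -= alpha[i] * y[i][j];
        }

        // Initial Hessian approximation H0 = gamma * I, with the
        // usual scaling gamma = (s^T y) / (y^T y) of the newest pair.
        double gamma = Dot(s[m - 1], y[m - 1]) / Dot(y[m - 1], y[m - 1]);
        var r = new double[n];
        for (int j = 0; j < n; j++)
            r[j] = gamma * q[j];

        // Second loop: oldest pair to newest.
        for (int i = 0; i < m; i++)
        {
            double rho = 1.0 / Dot(y[i], s[i]);
            double beta = rho * Dot(y[i], r);
            for (int j = 0; j < n; j++)
                r[j] += s[i][j] * (alpha[i] - beta);
        }

        return r; // the search direction is then -r
    }

    static double Dot(double[] a, double[] b)
    {
        double sum = 0;
        for (int i = 0; i < a.Length; i++)
            sum += a[i] * b[i];
        return sum;
    }
}
```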
The framework implementation of this method is based on the original FORTRAN source code by Jorge Nocedal. The original FORTRAN source code of L-BFGS (for unconstrained problems) is available at http://www.netlib.org/opt/lbfgs_um.shar and has been made available in the public domain.
The following example shows the basic usage of the L-BFGS solver to find the minimum of a function, specifying the objective function and its gradient.
```csharp
// Suppose we would like to find the minimum of the function
//
//   f(x,y) = -exp{-(x-1)²} - exp{-(y-2)²/2}
//
// First we need to write down the function, either as a named
// method, an anonymous method, or as a lambda function:

Func<double[], double> f = (x) =>
    -Math.Exp(-Math.Pow(x[0] - 1, 2)) - Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2));

// Now, we need to write its gradient, which is just the
// vector of first partial derivatives del_f / del_x, as:
//
//   g(x,y) = { del f / del x, del f / del y }
//
Func<double[], double[]> g = (x) => new double[]
{
    // df/dx = 2 e^(-(x-1)^2) (x-1)
    2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),

    // df/dy = e^(-1/2 (y-2)^2) (y-2)
    Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2)) * (x[1] - 2)
};

// Finally, we can create the L-BFGS solver, passing the functions as arguments
var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);

// And then minimize the function:
bool success = lbfgs.Minimize();
double minValue = lbfgs.Value;
double[] solution = lbfgs.Solution;

// The resulting minimum value should be -2, and the solution
// vector should be { 1.0, 2.0 }. The answer can be checked on
// Wolfram Alpha by following the link:
// http://www.wolframalpha.com/input/?i=maximize+%28exp%28-%28x-1%29%C2%B2%29+%2B+exp%28-%28y-2%29%C2%B2%2F2%29%29
```
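The Wolfram Alpha link above actually maximizes the sign-flipped function. The same result can be reproduced with the inherited Maximize method (see the methods table), using the positive form of the objective and the correspondingly negated gradient; the snippet below is a sketch continuing the example above.

```csharp
// Equivalent formulation: maximize the sign-flipped function
//
//   h(x,y) = exp{-(x-1)²} + exp{-(y-2)²/2}
//
Func<double[], double> h = (x) =>
    Math.Exp(-Math.Pow(x[0] - 1, 2)) + Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2));

Func<double[], double[]> hg = (x) => new double[]
{
    // dh/dx = -2 e^(-(x-1)^2) (x-1)
    -2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),

    // dh/dy = -e^(-1/2 (y-2)^2) (y-2)
    -Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2)) * (x[1] - 2)
};

var solver = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: h, gradient: hg);

// The maximum should be +2 at the same point, { 1.0, 2.0 }:
bool ok = solver.Maximize();
double maxValue = solver.Value;    // +2
double[] argmax = solver.Solution; // { 1.0, 2.0 }
```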