BoundedBroydenFletcherGoldfarbShanno Class
Namespace: Accord.Math.Optimization
```csharp
public class BoundedBroydenFletcherGoldfarbShanno : BaseGradientOptimizationMethod,
    IGradientOptimizationMethod, IOptimizationMethod, IOptimizationMethod<double[], double>,
    IGradientOptimizationMethod<double[], double>, IFunctionOptimizationMethod<double[], double>,
    IOptimizationMethod<BoundedBroydenFletcherGoldfarbShannoStatus>,
    IOptimizationMethod<double[], double, BoundedBroydenFletcherGoldfarbShannoStatus>,
    ISupportsCancellation
```
The BoundedBroydenFletcherGoldfarbShanno type exposes the following members.
Constructors

Name | Description
---|---
BoundedBroydenFletcherGoldfarbShanno(Int32) | Creates a new instance of the L-BFGS optimization algorithm.
BoundedBroydenFletcherGoldfarbShanno(Int32, Func&lt;Double[], Double&gt;, Func&lt;Double[], Double[]&gt;) | Creates a new instance of the L-BFGS optimization algorithm.
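As a minimal sketch of how the two overloads above might be used (the parameter names numberOfVariables, function and gradient, the toy objective, and the assumption that the objective can also be assigned later through the Function and Gradient properties listed below are illustrative, not taken from this page):

```csharp
using System;
using Accord.Math.Optimization;

class ConstructorSketch
{
    static void Main()
    {
        // Overload 1: only the number of variables is given; the objective
        // and its gradient are assigned afterwards through the properties.
        var solverA = new BoundedBroydenFletcherGoldfarbShanno(numberOfVariables: 2);
        solverA.Function = x => Math.Pow(x[0] - 3, 2) + Math.Pow(x[1] + 1, 2);
        solverA.Gradient = x => new[] { 2 * (x[0] - 3), 2 * (x[1] + 1) };

        // Overload 2: number of variables, objective and gradient together.
        var solverB = new BoundedBroydenFletcherGoldfarbShanno(2,
            function: x => Math.Pow(x[0] - 3, 2) + Math.Pow(x[1] + 1, 2),
            gradient: x => new double[] { 2 * (x[0] - 3), 2 * (x[1] + 1) });

        solverB.Minimize();
        Console.WriteLine(String.Join(", ", solverB.Solution)); // expected near 3, -1
    }
}
```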
Properties

Name | Description
---|---
Corrections | Gets or sets the number of corrections used in the L-BFGS update. Recommended values are between 3 and 7. Default is 5.
Evaluations | 
Function | Gets or sets the function to be optimized. (Inherited from BaseOptimizationMethod.)
FunctionTolerance | Gets or sets the accuracy with which the solution is to be found. Default value is 1e5. Smaller values up until zero result in higher accuracy.
Gradient | Gets or sets a function returning the gradient vector of the function to be optimized for a given value of its free parameters. (Inherited from BaseGradientOptimizationMethod.)
GradientTolerance | Gets or sets a tolerance value used when detecting convergence of the gradient vector steps. Default is 0.
Iterations | 
LowerBounds | Gets or sets the lower bounds of the interval in which the solution must be found.
MaxIterations | Gets or sets the maximum number of iterations to be performed during optimization. Default is 0 (iterate until convergence).
NumberOfVariables | Gets the number of variables (free parameters) in the optimization problem. (Inherited from BaseOptimizationMethod.)
Solution | Gets the current solution found, the values of the parameters which optimize the function. (Inherited from BaseOptimizationMethod.)
Status | 
Token | Gets or sets a cancellation token that can be used to stop the learning algorithm while it is running. (Inherited from BaseOptimizationMethod.)
UpperBounds | Gets or sets the upper bounds of the interval in which the solution must be found.
Value | Gets the output of the function at the current Solution. (Inherited from BaseOptimizationMethod.)
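A hedged configuration sketch using the properties above (the quadratic objective, the bound values, and the assumption that LowerBounds and UpperBounds are double[] arrays of length NumberOfVariables are illustrative, not taken from this page):

```csharp
using System;
using Accord.Math.Optimization;

class ConfigurationSketch
{
    static void Main()
    {
        var solver = new BoundedBroydenFletcherGoldfarbShanno(numberOfVariables: 2)
        {
            Corrections = 5,          // L-BFGS corrections; 3-7 recommended
            FunctionTolerance = 1e5,  // smaller values request higher accuracy
            GradientTolerance = 0,    // convergence tolerance on gradient steps
            MaxIterations = 0         // 0 = iterate until convergence
        };

        // Illustrative objective: f(x, y) = x² + y², minimized at the origin.
        solver.Function = x => Math.Pow(x[0], 2) + Math.Pow(x[1], 2);
        solver.Gradient = x => new[] { 2 * x[0], 2 * x[1] };

        // Box constraints: each variable restricted to [-10, 10].
        for (int i = 0; i < solver.NumberOfVariables; i++)
        {
            solver.LowerBounds[i] = -10;
            solver.UpperBounds[i] = +10;
        }

        solver.Minimize();
        Console.WriteLine(String.Join(", ", solver.Solution)); // expected near { 0, 0 }
    }
}
```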
Methods

Name | Description
---|---
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
Maximize() | Finds the maximum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseGradientOptimizationMethod.)
Maximize(Double[]) | Finds the maximum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseOptimizationMethod.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
Minimize() | Finds the minimum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseGradientOptimizationMethod.)
Minimize(Double[]) | Finds the minimum value of a function. The solution vector will be made available at the Solution property. (Inherited from BaseOptimizationMethod.)
OnNumberOfVariablesChanged | Called when the NumberOfVariables property has changed. (Inherited from BaseOptimizationMethod.)
Optimize | Implements the actual optimization algorithm. This method should try to minimize the objective function. (Overrides BaseOptimizationMethod.Optimize().)
ToString | Returns a string that represents the current object. (Inherited from Object.)
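A small usage sketch of the Minimize/Maximize overloads listed above (it assumes the Double[] overloads take an initial guess and that the parameterless overloads start from the current Solution vector; the concave objective is invented for illustration):

```csharp
using System;
using Accord.Math.Optimization;

class MinimizeMaximizeSketch
{
    static void Main()
    {
        // A simple concave objective with its maximum at (1, -2), value 7.
        Func<double[], double> f = x => 7 - Math.Pow(x[0] - 1, 2) - Math.Pow(x[1] + 2, 2);
        Func<double[], double[]> g = x => new[] { -2 * (x[0] - 1), -2 * (x[1] + 2) };

        var solver = new BoundedBroydenFletcherGoldfarbShanno(2, f, g);

        // Maximize(Double[]) starts the search from an explicit initial guess.
        bool converged = solver.Maximize(new double[] { 0, 0 });

        double[] argMax = solver.Solution;  // should be near { 1, -2 }
        double maxValue = solver.Value;     // should be near 7

        Console.WriteLine(maxValue);
        Console.WriteLine(solver.Status);   // exit condition of the last run
    }
}
```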
Extension Methods

Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To&lt;T&gt;() | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
The L-BFGS algorithm is a member of the broad family of quasi-Newton optimization methods. L-BFGS stands for 'Limited-memory BFGS'. Indeed, L-BFGS uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix (denoted by Hk). Unlike the original BFGS method, which stores a dense approximation, L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its moderate memory requirements, the L-BFGS method is particularly well suited for optimization problems with a large number of variables.
L-BFGS never explicitly forms or stores Hk. Instead, it maintains a history of the past m updates of the position x and gradient g, where generally the history size m can be small, often less than 10. These updates are used to implicitly perform the operations requiring the Hk-vector product, as sketched below.
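For intuition, the sketch below shows the standard L-BFGS two-loop recursion from the literature, which realizes this implicit Hk-vector product from the stored pairs of position differences s_i and gradient differences y_i. It is an illustration of the technique, not a transcription of the framework's internal implementation:

```csharp
using System.Collections.Generic;

static class LbfgsTwoLoopSketch
{
    // Computes an approximation of H_k * grad using only the last m pairs
    // s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i, never forming H_k itself.
    public static double[] TwoLoopRecursion(double[] grad, IList<double[]> s, IList<double[]> y)
    {
        int m = s.Count;
        int n = grad.Length;
        var q = (double[])grad.Clone();
        var alpha = new double[m];
        var rho = new double[m];

        // First loop: walk backwards through the history, removing the
        // contribution of each stored update from q.
        for (int i = m - 1; i >= 0; i--)
        {
            rho[i] = 1.0 / Dot(y[i], s[i]);
            alpha[i] = rho[i] * Dot(s[i], q);
            for (int j = 0; j < n; j++)
                q[j] -= alpha[i] * y[i][j];
        }

        // Initial Hessian scaling: gamma = (s·y) / (y·y) for the newest pair.
        double gamma = m > 0 ? Dot(s[m - 1], y[m - 1]) / Dot(y[m - 1], y[m - 1]) : 1.0;
        var z = new double[n];
        for (int j = 0; j < n; j++)
            z[j] = gamma * q[j];

        // Second loop: walk forwards, reapplying the stored curvature information.
        for (int i = 0; i < m; i++)
        {
            double beta = rho[i] * Dot(y[i], z);
            for (int j = 0; j < n; j++)
                z[j] += s[i][j] * (alpha[i] - beta);
        }

        return z; // approximately H_k * grad; negate it to obtain a descent direction
    }

    static double Dot(double[] a, double[] b)
    {
        double sum = 0;
        for (int i = 0; i < a.Length; i++)
            sum += a[i] * b[i];
        return sum;
    }
}
```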
The framework implementation of this method is based on the original FORTRAN source code by Jorge Nocedal (see references below). The original FORTRAN source code of L-BFGS (for unconstrained problems) is available at http://www.netlib.org/opt/lbfgs_um.shar and has been made available in the public domain.
References:
The following example shows the basic usage of the L-BFGS solver to find the minimum of a function, specifying its objective function and gradient.
```csharp
// Suppose we would like to find the minimum of the function
//
//   f(x,y) = -exp{-(x-1)²} - exp{-(y-2)²/2}
//
// First we need to write down the function either as a named
// method, an anonymous method or as a lambda function:

Func<double[], double> f = (x) =>
    -Math.Exp(-Math.Pow(x[0] - 1, 2)) - Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2));

// Now, we need to write its gradient, which is just the
// vector of first partial derivatives del_f / del_x, as:
//
//   g(x,y) = { del f / del x, del f / del y }
//
Func<double[], double[]> g = (x) => new double[]
{
    // df/dx = 2 e^(-(x-1)²) (x-1)
    2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),

    // df/dy = e^(-(y-2)²/2) (y-2)
    Math.Exp(-0.5 * Math.Pow(x[1] - 2, 2)) * (x[1] - 2)
};

// Finally, we can create the L-BFGS solver, passing the functions as arguments
var lbfgs = new BoundedBroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);

// And then minimize the function:
bool success = lbfgs.Minimize();
double minValue = lbfgs.Value;
double[] solution = lbfgs.Solution;

// The resultant minimum value should be -2, and the solution
// vector should be { 1.0, 2.0 }. The answer can be checked on
// Wolfram Alpha by following the link below:
// http://www.wolframalpha.com/input/?i=maximize+%28exp%28-%28x-1%29%C2%B2%29+%2B+exp%28-%28y-2%29%C2%B2%2F2%29%29
```
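Since this class is the box-constrained (L-BFGS-B) variant, a natural follow-up is to restrict the search region before calling Minimize(). The sketch below reuses the f and g defined above and assumes that LowerBounds and UpperBounds are plain double[] arrays indexed per variable; the bound values are chosen purely for illustration:

```csharp
// Restrict the search to the box 0 <= x <= 0.5, 0 <= y <= 1.5. The
// unconstrained minimizer (1, 2) lies outside this box, so the solver
// should stop on the boundary of the feasible region instead.
var bounded = new BoundedBroydenFletcherGoldfarbShanno(numberOfVariables: 2, function: f, gradient: g);

bounded.LowerBounds[0] = 0.0; bounded.UpperBounds[0] = 0.5;
bounded.LowerBounds[1] = 0.0; bounded.UpperBounds[1] = 1.5;

bool ok = bounded.Minimize();

// The solution should be clamped to the box, i.e. close to { 0.5, 1.5 },
// and Status reports why the solver stopped.
Console.WriteLine(String.Join(", ", bounded.Solution));
Console.WriteLine(bounded.Status);
```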