LevenbergMarquardt Class
Namespace: Accord.Math.Optimization
```csharp
public class LevenbergMarquardt : BaseLeastSquaresMethod, ILeastSquaresMethod, IConvergenceLearning
```
The LevenbergMarquardt type exposes the following members.
Constructors

Name | Description
---|---
LevenbergMarquardt() | Initializes a new instance of the LevenbergMarquardt class.
LevenbergMarquardt(Int32) | Initializes a new instance of the LevenbergMarquardt class with the given number of free parameters.
Properties

Name | Description
---|---
Adjustment | Gets or sets the learning rate adjustment.
Blocks | Gets or sets the number of blocks into which the Jacobian matrix is divided during the Hessian calculation, to preserve memory. Default is 1.
Convergence | Gets or sets the convergence verification method. (Inherited from BaseLeastSquaresMethod.)
CurrentIteration | Gets the current iteration number. (Inherited from BaseLeastSquaresMethod.)
Function | Gets or sets a parameterized model function mapping input vectors to output values, whose optimal parameters must be found. (Inherited from BaseLeastSquaresMethod.)
Gradient | Gets or sets a function that computes the gradient vector with respect to the function parameters, given a set of input and output values. (Inherited from BaseLeastSquaresMethod.)
HasConverged | Gets whether the algorithm has converged. (Inherited from BaseLeastSquaresMethod.)
Hessian | Gets the approximate Hessian matrix of second derivatives generated in the last algorithm iteration. The Hessian is stored in the upper triangular part of this matrix. See remarks for details.
Iterations | Obsolete. Please use MaxIterations instead. (Inherited from BaseLeastSquaresMethod.)
LearningRate | Gets or sets Levenberg's damping factor, also known as lambda (see the configuration sketch after this table).
MaxIterations | Gets or sets the maximum number of iterations performed by the iterative algorithm. Default is 100. (Inherited from BaseLeastSquaresMethod.)
NumberOfParameters | Gets the number of variables (free parameters) in the optimization problem. (Inherited from BaseLeastSquaresMethod.)
NumberOfVariables | Obsolete. Gets the number of variables (free parameters) in the optimization problem. (Inherited from BaseLeastSquaresMethod.)
ParallelOptions | Gets or sets the parallelization options for this algorithm. (Inherited from ParallelLearningBase.)
Solution | Gets the solution found: the values of the parameters which optimize the function in a least squares sense. (Inherited from BaseLeastSquaresMethod.)
StandardErrors | Gets the standard error for each parameter in the solution.
Token | Gets or sets a cancellation token that can be used to cancel the algorithm while it is running. (Inherited from ParallelLearningBase.)
Tolerance | Gets or sets the maximum relative change in the watched value after an iteration of the algorithm, used to detect convergence. Default is zero. (Inherited from BaseLeastSquaresMethod.)
Value | Gets the value at the solution found. This should be the minimum value found for the objective function. (Inherited from BaseLeastSquaresMethod.)
Methods

Name | Description
---|---
ComputeError | Computes the model error for a given data set. (Inherited from BaseLeastSquaresMethod.)
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
Initialize | This method should be implemented by child classes to initialize their fields once the NumberOfParameters is known. (Overrides BaseLeastSquaresMethod.Initialize.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
Minimize | Attempts to find the best values for the parameter vector, minimizing the discrepancy between the generated outputs and the expected outputs for a given set of input data.
ToString | Returns a string that represents the current object. (Inherited from Object.)
Extension Methods

Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To<T>() | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
Remarks

While it is possible to use the LevenbergMarquardt class as a standalone method for solving least squares problems, this class is intended to be used as a strategy for NonlinearLeastSquares, as shown in the example below:
```csharp
// Suppose we would like to map the continuous values in the
// second column to the integer values in the first column.
double[,] data =
{
    { -40, -21142.1111111111 },
    { -30, -21330.1111111111 },
    { -20, -12036.1111111111 },
    { -10,   7255.3888888889 },
    {   0,  32474.8888888889 },
    {  10,  32474.8888888889 },
    {  20,   9060.8888888889 },
    {  30, -11628.1111111111 },
    {  40, -15129.6111111111 },
};

// Extract inputs and outputs
double[][] inputs = data.GetColumn(0).ToJagged();
double[] outputs = data.GetColumn(1);

// Create a nonlinear regression using Levenberg-Marquardt
var nls = new NonlinearLeastSquares()
{
    NumberOfParameters = 3,

    // Initialize to some random values
    StartValues = new[] { 4.2, 0.3, 1 },

    // Let's assume a quadratic model function: ax² + bx + c
    Function = (w, x) => w[0] * x[0] * x[0] + w[1] * x[0] + w[2],

    // Derivatives with respect to the weights:
    Gradient = (w, x, r) =>
    {
        // https://www.wolframalpha.com/input/?i=diff+ax²+%2B+bx+%2B+c+w.r.t.+a
        r[0] = x[0] * x[0]; // w.r.t. a: x²

        // https://www.wolframalpha.com/input/?i=diff+ax²+%2B+bx+%2B+c+w.r.t.+b
        r[1] = x[0];        // w.r.t. b: x

        // https://www.wolframalpha.com/input/?i=diff+ax²+%2B+bx+%2B+c+w.r.t.+c
        r[2] = 1;           // w.r.t. c: 1
    },

    Algorithm = new LevenbergMarquardt()
    {
        MaxIterations = 100,
        Tolerance = 0
    }
};

var regression = nls.Learn(inputs, outputs);

// Use the regression to compute the predicted output values
double[] predict = regression.Transform(inputs);
```
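After learning, the fitted coefficients a, b and c can be read back from the returned regression object; a short sketch, assuming the regression exposes its parameter vector through a Coefficients property:

```csharp
// Sketch: reading back the fitted quadratic coefficients
// (assumes the regression exposes a Coefficients property).
double a = regression.Coefficients[0];
double b = regression.Coefficients[1];
double c = regression.Coefficients[2];
```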
However, as mentioned above, it is also possible to use LevenbergMarquardt as a standalone class, as shown in the example below:
```csharp
// Example from https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
// In this example, the Levenberg-Marquardt algorithm will be used to fit a
// model to some data by minimizing the sum of squares of errors between the
// data and the model's predictions.

// In a biology experiment studying the relation between substrate concentration [S]
// and reaction rate in an enzyme-mediated reaction, the data in the following table
// were obtained:
double[][] inputs = Jagged.ColumnVector(new[] { 0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740 });
double[] outputs = new[] { 0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317 };

// It is desired to find a curve (model function) of the form
//
//   rate = (V_max * [S]) / (K_M + [S])
//
// that best fits the data in the least squares sense, with the parameters
// V_max and K_M to be determined. Let's start by writing the model equation:
LeastSquaresFunction function = (double[] parameters, double[] input) =>
{
    return (parameters[0] * input[0]) / (parameters[1] + input[0]);
};

// Now, we can either write the gradient function of the model by hand or let
// it be computed automatically through a finite-differences approximation:
LeastSquaresGradientFunction gradient = (double[] parameters, double[] input, double[] result) =>
{
    result[0] = -((-input[0]) / (parameters[1] + input[0]));
    result[1] = -((parameters[0] * input[0]) / Math.Pow(parameters[1] + input[0], 2));
};

// Create a new Levenberg-Marquardt algorithm
var gn = new LevenbergMarquardt(parameters: 2)
{
    Function = function,
    Gradient = gradient,
    Solution = new[] { 0.9, 0.2 } // starting from b1 = 0.9 and b2 = 0.2
};

// Find the minimum value:
gn.Minimize(inputs, outputs);

// The solution will be at:
double b1 = gn.Solution[0]; // will be 0.362
double b2 = gn.Solution[1]; // will be 0.556
```
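Once Minimize has run, the quality of the fit can be inspected through the members listed in the tables above. A minimal sketch, assuming ComputeError accepts the same input and output arrays that were passed to Minimize:

```csharp
// Sketch: inspecting the result (member names taken from the tables above;
// the exact ComputeError signature is an assumption).
double valueAtSolution = gn.Value;               // objective value at the solution
double error = gn.ComputeError(inputs, outputs); // model error for this data set
double[] stderr = gn.StandardErrors;             // standard error of each parameter
bool converged = gn.HasConverged;                // whether convergence was detected
```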