
LevenbergMarquardt Class

Levenberg-Marquardt algorithm for solving Least-Squares problems.
Inheritance Hierarchy
System.Object
  Accord.MachineLearning.ParallelLearningBase
    Accord.Math.Optimization.BaseLeastSquaresMethod
      Accord.Math.Optimization.LevenbergMarquardt

Namespace:  Accord.Math.Optimization
Assembly:  Accord.Math (in Accord.Math.dll) Version: 3.8.0
public class LevenbergMarquardt : BaseLeastSquaresMethod, 
	ILeastSquaresMethod, IConvergenceLearning

The LevenbergMarquardt type exposes the following members.

Public method LevenbergMarquardt()
Initializes a new instance of the LevenbergMarquardt class.
Public method LevenbergMarquardt(Int32)
Initializes a new instance of the LevenbergMarquardt class.
Public property Adjustment
Learning rate adjustment.
Public property Blocks
Gets or sets the number of blocks to divide the Jacobian matrix into during the Hessian calculation, to preserve memory. Default is 1.
Protected property Convergence
Gets or sets the convergence verification method.
(Inherited from BaseLeastSquaresMethod.)
Public property CurrentIteration
Gets the current iteration number.
(Inherited from BaseLeastSquaresMethod.)
Public property Function
Gets or sets a parameterized model function mapping input vectors into output values, whose optimum parameters must be found.
(Inherited from BaseLeastSquaresMethod.)
Public property Gradient
Gets or sets a function that computes the gradient vector with respect to the function parameters, given a set of input and output values.
(Inherited from BaseLeastSquaresMethod.)
Public property HasConverged
Gets whether the algorithm has converged.
(Inherited from BaseLeastSquaresMethod.)
Public property Hessian
Gets the approximate Hessian matrix of second derivatives generated in the last algorithm iteration. The Hessian is stored in the upper triangular part of this matrix. See remarks for details.
Public property Iterations (Obsolete.)
Please use MaxIterations instead.
(Inherited from BaseLeastSquaresMethod.)
Public property LearningRate
Levenberg's damping factor, also known as lambda.
Public property MaxIterations
Gets or sets the maximum number of iterations performed by the iterative algorithm. Default is 100.
(Inherited from BaseLeastSquaresMethod.)
Public property NumberOfParameters
Gets the number of variables (free parameters) in the optimization problem.
(Inherited from BaseLeastSquaresMethod.)
Public property NumberOfVariables (Obsolete.)
Gets the number of variables (free parameters) in the optimization problem.
(Inherited from BaseLeastSquaresMethod.)
Public property ParallelOptions
Gets or sets the parallelization options for this algorithm.
(Inherited from ParallelLearningBase.)
Public property Solution
Gets the solution found: the values of the parameters which optimize the function in a least-squares sense.
(Inherited from BaseLeastSquaresMethod.)
Public property StandardErrors
Gets the standard error for each parameter in the solution.
Public property Token
Gets or sets a cancellation token that can be used to cancel the algorithm while it is running.
(Inherited from ParallelLearningBase.)
Public property Tolerance
Gets or sets the maximum relative change in the watched value after an iteration of the algorithm used to detect convergence. Default is zero.
(Inherited from BaseLeastSquaresMethod.)
Public property Value
Gets the value at the solution found. This should be the minimum value found for the objective function.
(Inherited from BaseLeastSquaresMethod.)
Public method ComputeError
Computes the model error for a given data set.
(Inherited from BaseLeastSquaresMethod.)
Public method Equals
Determines whether the specified object is equal to the current object.
(Inherited from Object.)
Protected method Finalize
Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection.
(Inherited from Object.)
Public method GetHashCode
Serves as the default hash function.
(Inherited from Object.)
Public method GetType
Gets the Type of the current instance.
(Inherited from Object.)
Protected method Initialize
This method should be implemented by child classes to initialize their fields once the NumberOfParameters is known.
(Overrides BaseLeastSquaresMethod.Initialize.)
Protected method MemberwiseClone
Creates a shallow copy of the current Object.
(Inherited from Object.)
Public method Minimize
Attempts to find the best values for the parameter vector, minimizing the discrepancy between the generated outputs and the expected outputs for a given set of input data.
Public method ToString
Returns a string that represents the current object.
(Inherited from Object.)
Extension Methods
Public Extension Method HasMethod
Checks whether an object implements a method with the given name.
(Defined by ExtensionMethods.)
Public Extension Method IsEqual
Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices.
(Defined by Matrix.)
Public Extension Method To(Type) (Overloaded.)
Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)
Public Extension Method To&lt;T&gt; (Overloaded.)
Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)
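
The LearningRate and Adjustment properties listed above control the damping in the Levenberg-Marquardt step. As a sketch of the textbook formulation (not necessarily the exact internal implementation), each iteration solves a damped normal-equations system:

```latex
% J  : Jacobian of the model with respect to the parameters \beta
% r  : residual vector, r = y - f(x; \beta)
% \lambda : damping factor (the LearningRate property)
(J^\top J + \lambda I)\,\delta = J^\top r,
\qquad
\beta \leftarrow \beta + \delta
```

In the usual scheme, when a step reduces the sum of squared errors, lambda is divided by the adjustment factor (the step behaves more like Gauss-Newton); when a step fails, lambda is multiplied by it (the step behaves more like gradient descent).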

While it is possible to use the LevenbergMarquardt class as a standalone method for solving least-squares problems, this class is intended to be used as a strategy for NonlinearLeastSquares, as shown in the example below:

// Suppose we would like to map the continuous values in the
// second column to the integer values in the first column.
double[,] data =
{
    { -40,    -21142.1111111111 },
    { -30,    -21330.1111111111 },
    { -20,    -12036.1111111111 },
    { -10,      7255.3888888889 },
    {   0,     32474.8888888889 },
    {  10,     32474.8888888889 },
    {  20,      9060.8888888889 },
    {  30,    -11628.1111111111 },
    {  40,    -15129.6111111111 },
};

// Extract inputs and outputs
double[][] inputs = data.GetColumn(0).ToJagged();
double[] outputs = data.GetColumn(1);

// Create a nonlinear regression using the Levenberg-Marquardt algorithm
var nls = new NonlinearLeastSquares()
{
    NumberOfParameters = 3,

    // Initialize to some random values
    StartValues = new[] { 4.2, 0.3, 1 },

    // Let's assume a quadratic model function: ax² + bx + c
    Function = (w, x) => w[0] * x[0] * x[0] + w[1] * x[0] + w[2],

    // Derivative with respect to the weights:
    Gradient = (w, x, r) =>
    {
        r[0] = x[0] * x[0]; // w.r.t. a: x²
        r[1] = x[0];        // w.r.t. b: x
        r[2] = 1;           // w.r.t. c: 1
    },

    Algorithm = new LevenbergMarquardt()
    {
        MaxIterations = 100,
        Tolerance = 0
    }
};
var regression = nls.Learn(inputs, outputs);

// Use the function to compute the output values
double[] predict = regression.Transform(inputs);
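
The gradient coded in the example is obtained by differentiating the quadratic model with respect to each weight:

```latex
f(x;\, a, b, c) = a x^2 + b x + c
\quad\Rightarrow\quad
\frac{\partial f}{\partial a} = x^2,
\qquad
\frac{\partial f}{\partial b} = x,
\qquad
\frac{\partial f}{\partial c} = 1
```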

However, as mentioned above, it is also possible to use LevenbergMarquardt as a standalone class, as shown in the example below:

// Example from

// In this example, the Levenberg-Marquardt algorithm will be used to fit a model to 
// some data by minimizing the sum of squares of errors between the data and the
// model's predictions.

// In a biology experiment studying the relation between substrate concentration [S]
// and reaction rate in an enzyme-mediated reaction, the data in the following table
// were obtained:

double[][] inputs = Jagged.ColumnVector(new [] { 0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740 });
double[] outputs = new[] { 0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317 };

// It is desired to find a curve (model function) of the form
//   rate = \frac{V_{max}[S]}{K_M+[S]}
// that fits best the data in the least squares sense, with the parameters V_max
// and K_M to be determined. Let's start by writing model equation below:

LeastSquaresFunction function = (double[] parameters, double[] input) =>
{
    return (parameters[0] * input[0]) / (parameters[1] + input[0]);
};

// Now, we can either write the gradient function of the model by hand or let
// the model compute it automatically using the finite-differences method:

LeastSquaresGradientFunction gradient = (double[] parameters, double[] input, double[] result) =>
{
    result[0] = -((-input[0]) / (parameters[1] + input[0]));
    result[1] = -((parameters[0] * input[0]) / Math.Pow(parameters[1] + input[0], 2));
};

// Create a new Levenberg-Marquardt algorithm
var gn = new LevenbergMarquardt(parameters: 2)
{
    Function = function,
    Gradient = gradient,
    Solution = new[] { 0.9, 0.2 } // starting from b1 = 0.9 and b2 = 0.2
};

// Find the minimum value:
gn.Minimize(inputs, outputs);

// The solution will be at:
double b1 = gn.Solution[0]; // will be 0.362
double b2 = gn.Solution[1]; // will be 0.556
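
The signs in the hand-written gradient follow from differentiating the Michaelis-Menten model with respect to its two parameters:

```latex
\text{rate} = \frac{V_{\max}[S]}{K_M + [S]}
\quad\Rightarrow\quad
\frac{\partial\,\text{rate}}{\partial V_{\max}} = \frac{[S]}{K_M + [S]},
\qquad
\frac{\partial\,\text{rate}}{\partial K_M} = -\frac{V_{\max}[S]}{(K_M + [S])^2}
```

The extra leading minus signs in the code come from differentiating the residual (observed minus predicted) rather than the model itself.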