
GaussNewton Class

Gauss-Newton algorithm for solving Least-Squares problems.
Inheritance Hierarchy
System.Object
  Accord.MachineLearning.ParallelLearningBase
    Accord.Math.Optimization.BaseLeastSquaresMethod
      Accord.Math.Optimization.GaussNewton

Namespace:  Accord.Math.Optimization
Assembly:  Accord.Math (in Accord.Math.dll) Version: 3.8.0
Syntax
public class GaussNewton : BaseLeastSquaresMethod, 
	ILeastSquaresMethod, IConvergenceLearning

The GaussNewton type exposes the following members.

Constructors
Name  Description
Public method  GaussNewton
Initializes a new instance of the GaussNewton class.
Public method  GaussNewton(Int32)
Initializes a new instance of the GaussNewton class.
Top
Properties
Name  Description
Protected property  Convergence
Gets or sets the convergence verification method.
(Inherited from BaseLeastSquaresMethod.)
Public property  CurrentIteration
Gets the current iteration number.
(Inherited from BaseLeastSquaresMethod.)
Public property  Deltas
Gets the vector of coefficient updates computed in the last iteration (see the update note after this table).
Public property  Function
Gets or sets a parameterized model function mapping input vectors into output values, whose optimum parameters must be found.
(Inherited from BaseLeastSquaresMethod.)
Public property  Gradient
Gets or sets a function that computes the gradient vector with respect to the function parameters, given a set of input and output values.
(Inherited from BaseLeastSquaresMethod.)
Public property  HasConverged
Gets whether the algorithm has converged.
(Inherited from BaseLeastSquaresMethod.)
Public property  Hessian
Gets the approximate Hessian matrix of second derivatives created during the last algorithm iteration.
Public property  Iterations (Obsolete)
Please use MaxIterations instead.
(Inherited from BaseLeastSquaresMethod.)
Public property  Jacobian
Gets the Jacobian matrix of first derivatives computed in the last iteration.
Public property  MaxIterations
Gets or sets the maximum number of iterations performed by the iterative algorithm. Default is 100.
(Inherited from BaseLeastSquaresMethod.)
Public property  NumberOfParameters
Gets the number of variables (free parameters) in the optimization problem.
(Inherited from BaseLeastSquaresMethod.)
Public property  NumberOfVariables (Obsolete)
Gets the number of variables (free parameters) in the optimization problem.
(Inherited from BaseLeastSquaresMethod.)
Public property  ParallelOptions
Gets or sets the parallelization options for this algorithm.
(Inherited from ParallelLearningBase.)
Public property  Residuals
Gets the vector of residuals computed in the last iteration. The residuals are computed as (y - f(w, x)), in which y are the expected output values and f is the parameterized model function.
Public property  Solution
Gets the solution found: the values of the parameters which optimize the function in a least-squares sense.
(Inherited from BaseLeastSquaresMethod.)
Public property  StandardErrors
Gets the standard error for each parameter in the solution.
Public property  Token
Gets or sets a cancellation token that can be used to cancel the algorithm while it is running.
(Inherited from ParallelLearningBase.)
Public property  Tolerance
Gets or sets the maximum relative change in the watched value after an iteration of the algorithm, used to detect convergence. Default is zero.
(Inherited from BaseLeastSquaresMethod.)
Public property  Value
Gets the value at the solution found. This should be the minimum value found for the objective function.
(Inherited from BaseLeastSquaresMethod.)
Top
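
Note: the Deltas, Hessian, Jacobian and Residuals properties above are related through the standard Gauss-Newton update. The following summarizes the textbook method; the exact internals of this class may differ slightly. Writing the residual vector as r_i = y_i - f(w, x_i) and the Jacobian as J_{ij} = \partial f(w, x_i) / \partial w_j, each iteration solves the normal equations

    (J^T J) \, \delta = J^T r

and updates the parameters as w \leftarrow w + \delta. Here J^T J is the approximate Hessian (Hessian), \delta is the vector of coefficient updates (Deltas), and r is the vector of residuals (Residuals).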
Methods
Name  Description
Public method  ComputeError
Computes the model error for a given data set.
(Inherited from BaseLeastSquaresMethod.)
Public method  Equals
Determines whether the specified object is equal to the current object.
(Inherited from Object.)
Protected method  Finalize
Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection.
(Inherited from Object.)
Public method  GetHashCode
Serves as the default hash function.
(Inherited from Object.)
Public method  GetType
Gets the Type of the current instance.
(Inherited from Object.)
Protected method  Initialize
This method should be implemented by child classes to initialize their fields once the NumberOfParameters is known.
(Overrides BaseLeastSquaresMethod.Initialize.)
Protected method  MemberwiseClone
Creates a shallow copy of the current Object.
(Inherited from Object.)
Public method  Minimize
Attempts to find the best values for the parameter vector, minimizing the discrepancy between the generated outputs and the expected outputs for a given set of input data.
Public method  ToString
Returns a string that represents the current object.
(Inherited from Object.)
Top
Extension Methods
Name  Description
Public Extension Method  HasMethod
Checks whether an object implements a method with the given name.
(Defined by ExtensionMethods.)
Public Extension Method  IsEqual
Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices.
(Defined by Matrix.)
Public Extension Method  To(Type)  (Overloaded)
Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)
Public Extension Method  To<T>  (Overloaded)
Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)
Top
Remarks
This class isn't suitable for most real-world problems. Instead, it is intended to be used as a baseline for comparison, to help debug and check other optimization methods such as LevenbergMarquardt.
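
For instance, one way to use it as such a baseline is to run GaussNewton and LevenbergMarquardt side by side on the same problem and compare their solutions. The sketch below is an illustration, not part of the original documentation: it assumes the function and gradient delegates and the inputs and outputs arrays defined as in the standalone example further down, and that LevenbergMarquardt is configured through the same BaseLeastSquaresMethod members shown above:

// Create both solvers for a model with two free parameters,
// starting from the same initial guess
var gn = new GaussNewton(parameters: 2)
{
    Function = function,
    Gradient = gradient,
    Solution = new[] { 0.9, 0.2 }
};

var lm = new LevenbergMarquardt(parameters: 2)
{
    Function = function,
    Gradient = gradient,
    Solution = new[] { 0.9, 0.2 }
};

// Run both algorithms on the same data
gn.Minimize(inputs, outputs);
lm.Minimize(inputs, outputs);

// On a well-behaved problem both should reach similar parameter values;
// a large discrepancy suggests a bug or an ill-conditioned problem.
double[] gnSolution = gn.Solution;
double[] lmSolution = lm.Solution;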
Examples

While it is possible to use the GaussNewton class as a standalone method for solving least squares problems, this class is intended to be used as a strategy for NonlinearLeastSquares, as shown in the example below:

// Suppose we would like to map the input values in the
// first row to the output values in the second row.
double[,] data =
{
    { 0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740 },
    { 0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317}
};

// Extract inputs and outputs
double[][] inputs = data.GetRow(0).ToJagged();
double[] outputs = data.GetRow(1);

// Create a nonlinear regression using the Gauss-Newton algorithm
var nls = new NonlinearLeastSquares()
{
    // Initial guesses for the parameters
    StartValues = new[] { 0.9, 0.2 },

    // Let's assume a rational model function: f(x) = (a * x) / (b + x)
    Function = (w, x) => (w[0] * x[0]) / (w[1] + x[0]),

    // Derivatives with respect to the weights:
    Gradient = (w, x, r) =>
    {
        r[0] = x[0] / (w[1] + x[0]);
        r[1] = -(w[0] * x[0]) / Math.Pow(w[1] + x[0], 2);
    },

    Algorithm = new GaussNewton()
    {
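        // Note: zero is conventionally used in Accord.NET to mean "no fixed
        // iteration limit"; convergence is then governed by Tolerance below.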
        MaxIterations = 0,
        Tolerance = 1e-5
    }
};


var regression = nls.Learn(inputs, outputs);

// Use the regression to compute the predicted output values
double[] predict = regression.Transform(inputs);
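
As a quick sanity check (a sketch; the loop below is not part of the original example), the residual sum of squares can be computed directly from the predictions:

// Residual sum of squares between expected and predicted outputs
double ssr = 0;
for (int i = 0; i < outputs.Length; i++)
{
    double error = outputs[i] - predict[i];
    ssr += error * error;   // accumulate the squared error
}
// A small ssr indicates the fitted curve follows the data closely.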

However, as mentioned above, it is also possible to use GaussNewton as a standalone class, as shown in the example below:

// Example from https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm

// In this example, the Gauss–Newton algorithm will be used to fit a model to 
// some data by minimizing the sum of squares of errors between the data and 
// model's predictions.

// In a biology experiment studying the relation between substrate concentration [S]
// and reaction rate in an enzyme-mediated reaction, the data in the following table
// were obtained:

double[][] inputs = Jagged.ColumnVector(new [] { 0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740 });
double[] outputs = new[] { 0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317 };

// It is desired to find a curve (model function) of the form
// 
//   rate = V_max * [S] / (K_M + [S])
// 
// that best fits the data in the least-squares sense, with the parameters V_max
// and K_M to be determined. Let's start by writing the model equation below:

LeastSquaresFunction function = (double[] parameters, double[] input) =>
{
    return (parameters[0] * input[0]) / (parameters[1] + input[0]);
};

// Now, we can either write the gradient function of the model by hand or let
// the model compute it automatically using Newton's finite differences method:

LeastSquaresGradientFunction gradient = (double[] parameters, double[] input, double[] result) =>
{
    result[0] = input[0] / (parameters[1] + input[0]);
    result[1] = -(parameters[0] * input[0]) / Math.Pow(parameters[1] + input[0], 2);
};

// Create a new Gauss-Newton algorithm
var gn = new GaussNewton(parameters: 2)
{
    Function = function,
    Gradient = gradient,
    Solution = new[] { 0.9, 0.2 } // starting from b1 = 0.9 and b2 = 0.2
};

// Find the minimum value:
gn.Minimize(inputs, outputs);

// The solution will be at:
double b1 = gn.Solution[0]; // will be 0.362
double b2 = gn.Solution[1]; // will be 0.556
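
After Minimize returns, the diagnostic properties described earlier can be inspected. A minimal sketch (using var, since the exact array types are version-dependent and assumed here):

// Inspect the state left by the last iteration
var residuals = gn.Residuals;        // y - f(w, x) for each sample
var hessian = gn.Hessian;            // approximate Hessian of second derivatives
var errors = gn.StandardErrors;      // standard error for each parameter
bool converged = gn.HasConverged;    // whether the algorithm has converged
double value = gn.Value;             // objective value at the solution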
See Also