GaussNewton Class 
Namespace: Accord.Math.Optimization
public class GaussNewton : BaseLeastSquaresMethod, ILeastSquaresMethod, IConvergenceLearning
The GaussNewton type exposes the following members.
Constructors

Name  Description  

GaussNewton 
Initializes a new instance of the GaussNewton class.
 
GaussNewton(Int32) 
Initializes a new instance of the GaussNewton class with the given number of free parameters.

Properties

Name  Description  

Convergence 
Gets or sets the convergence verification method.
(Inherited from BaseLeastSquaresMethod.)  
CurrentIteration 
Gets the current iteration number.
(Inherited from BaseLeastSquaresMethod.)  
Deltas 
Gets the vector of coefficient updates computed in the last iteration.
 
Function 
Gets or sets a parameterized model function mapping input vectors
into output values, whose optimum parameters must be found.
(Inherited from BaseLeastSquaresMethod.)  
Gradient 
Gets or sets a function that computes the gradient vector with respect
to the function parameters, given a set of input and output values.
(Inherited from BaseLeastSquaresMethod.)  
HasConverged 
Gets whether the algorithm has converged.
(Inherited from BaseLeastSquaresMethod.)  
Hessian 
Gets the approximate Hessian matrix of second derivatives
created during the last algorithm iteration.
 
Iterations  Obsolete.
Please use MaxIterations instead.
(Inherited from BaseLeastSquaresMethod.)  
Jacobian 
Gets the Jacobian matrix of first derivatives computed in the
last iteration.
 
MaxIterations 
Gets or sets the maximum number of iterations
performed by the iterative algorithm. Default
is 100.
(Inherited from BaseLeastSquaresMethod.)  
NumberOfParameters 
Gets the number of variables (free parameters) in the optimization problem.
(Inherited from BaseLeastSquaresMethod.)  
NumberOfVariables  Obsolete.
Gets the number of variables (free parameters) in the optimization problem.
(Inherited from BaseLeastSquaresMethod.)  
ParallelOptions 
Gets or sets the parallelization options for this algorithm.
(Inherited from ParallelLearningBase.)  
Residuals 
Gets the vector of residuals computed in the last iteration.
The residuals are computed as (y - f(w, x)), in which
y are the expected output values, and f is the
parameterized model function.
 
Solution 
Gets the solution found, i.e. the parameter values that
optimize the function in a least squares sense.
(Inherited from BaseLeastSquaresMethod.)  
StandardErrors 
Gets the standard error for each parameter in the solution.
 
Token 
Gets or sets a cancellation token that can be used
to cancel the algorithm while it is running.
(Inherited from ParallelLearningBase.)  
Tolerance 
Gets or sets the maximum relative change in the watched value
between two iterations that is used to detect convergence.
Default is zero.
(Inherited from BaseLeastSquaresMethod.)  
Value 
Gets the value at the solution found. This should be
the minimum value found for the objective function.
(Inherited from BaseLeastSquaresMethod.) 
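The Deltas, Residuals, Jacobian and Hessian properties listed above correspond to the pieces of the standard Gauss-Newton update. As a quick sketch in textbook notation (not Accord-specific code):

```latex
% At iteration k, with residuals r_i = y_i - f(\beta_k, x_i) and the Jacobian
% J_{ij} = \partial f(\beta_k, x_i) / \partial \beta_j, solve the normal equations:
(J^\top J)\,\Delta = J^\top r, \qquad \beta_{k+1} = \beta_k + \Delta
% Here J^\top J is the Gauss-Newton approximation of the Hessian, and \Delta
% is the vector of coefficient updates exposed by the Deltas property.
```

In the same notation, standard nonlinear least squares theory obtains per-parameter standard errors from the diagonal of s²(JᵀJ)⁻¹, where s² is the residual variance; this is the usual basis for a property such as StandardErrors.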
Methods

Name  Description  

ComputeError 
Compute model error for a given data set.
(Inherited from BaseLeastSquaresMethod.)  
Equals  Determines whether the specified object is equal to the current object. (Inherited from Object.)  
Finalize  Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)  
GetHashCode  Serves as the default hash function. (Inherited from Object.)  
GetType  Gets the Type of the current instance. (Inherited from Object.)  
Initialize 
This method should be implemented by child classes to initialize
their fields once the NumberOfParameters is known.
(Overrides BaseLeastSquaresMethod.Initialize.)  
MemberwiseClone  Creates a shallow copy of the current Object. (Inherited from Object.)  
Minimize 
Attempts to find the best values for the parameter vector
minimizing the discrepancy between the generated outputs
and the expected outputs for a given set of input data.
 
ToString  Returns a string that represents the current object. (Inherited from Object.) 
Extension Methods

Name  Description  

HasMethod 
Checks whether an object implements a method with the given name.
(Defined by ExtensionMethods.)  
IsEqual 
Compares two objects for equality, performing an elementwise
comparison if the elements are vectors or matrices.
(Defined by Matrix.)  
To(Type)  Overloaded.
Converts an object into another type, irrespective of whether
the conversion can be done at compile time or not. This can be
used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)  
To<T>  Overloaded.
Converts an object into another type, irrespective of whether
the conversion can be done at compile time or not. This can be
used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.) 
Remarks

While it is possible to use the GaussNewton class as a standalone method for solving least squares problems, this class is intended to be used as a strategy for NonlinearLeastSquares, as shown in the example below:
// Suppose we would like to fit the model below to the data, mapping
// the input values in the first row to the output values in the second row.
double[,] data =
{
    { 0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740 },
    { 0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317 }
};

// Extract inputs and outputs
double[][] inputs = data.GetRow(0).ToJagged();
double[] outputs = data.GetRow(1);

// Create a nonlinear regression
var nls = new NonlinearLeastSquares()
{
    // Initialize to some random values
    StartValues = new[] { 0.9, 0.2 },

    // Let's assume a rational model function: (a * x) / (b + x)
    Function = (w, x) => (w[0] * x[0]) / (w[1] + x[0]),

    // Derivatives with respect to the weights:
    Gradient = (w, x, r) =>
    {
        r[0] = x[0] / (w[1] + x[0]);
        r[1] = -(w[0] * x[0]) / Math.Pow(w[1] + x[0], 2);
    },

    Algorithm = new GaussNewton()
    {
        MaxIterations = 0,   // zero means: no fixed limit, iterate until convergence
        Tolerance = 1e-5
    }
};

var regression = nls.Learn(inputs, outputs);

// Use the regression to compute the predicted output values
double[] predict = regression.Transform(inputs);
However, as mentioned above, it is also possible to use GaussNewton as a standalone class, as shown in the example below:
// Example from https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
// In this example, the Gauss-Newton algorithm will be used to fit a model to
// some data by minimizing the sum of squares of errors between the data and
// the model's predictions.

// In a biology experiment studying the relation between substrate concentration [S]
// and reaction rate in an enzyme-mediated reaction, the data in the following table
// were obtained:
double[][] inputs = Jagged.ColumnVector(new[] { 0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740 });
double[] outputs = new[] { 0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317 };

// It is desired to find a curve (model function) of the form
//
//   rate = \frac{V_{max}[S]}{K_M + [S]}
//
// that best fits the data in the least squares sense, with the parameters V_max
// and K_M to be determined. Let's start by writing the model equation below:
LeastSquaresFunction function = (double[] parameters, double[] input) =>
{
    return (parameters[0] * input[0]) / (parameters[1] + input[0]);
};

// Now, we can either write the gradient function of the model by hand, or let
// it be computed automatically using finite differences:
LeastSquaresGradientFunction gradient = (double[] parameters, double[] input, double[] result) =>
{
    result[0] = input[0] / (parameters[1] + input[0]);
    result[1] = -(parameters[0] * input[0]) / Math.Pow(parameters[1] + input[0], 2);
};

// Create a new Gauss-Newton algorithm
var gn = new GaussNewton(parameters: 2)
{
    Function = function,
    Gradient = gradient,
    Solution = new[] { 0.9, 0.2 } // starting from b1 = 0.9 and b2 = 0.2
};

// Find the minimum value:
gn.Minimize(inputs, outputs);

// The solution will be at:
double b1 = gn.Solution[0]; // will be approximately 0.362
double b2 = gn.Solution[1]; // will be approximately 0.556
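For readers who want to see the iteration itself, the same update can be reimplemented in a few lines of NumPy. This is an illustrative sketch on the same enzyme-kinetics data, not part of the Accord.NET API; the function name gauss_newton is our own:

```python
import numpy as np

# Substrate concentrations [S] and reaction rates from the example above.
x = np.array([0.03, 0.1947, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.05, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

def gauss_newton(x, y, beta, iterations=10):
    """Fit rate = Vmax * [S] / (Km + [S]) with the Gauss-Newton method."""
    for _ in range(iterations):
        vmax, km = beta
        r = y - vmax * x / (km + x)          # residuals y - f(beta, x)
        # Jacobian of the model with respect to (Vmax, Km):
        J = np.column_stack([x / (km + x),
                             -vmax * x / (km + x) ** 2])
        # Solve the normal equations (J^T J) delta = J^T r and update beta:
        beta = beta + np.linalg.solve(J.T @ J, J.T @ r)
    return beta

vmax, km = gauss_newton(x, y, np.array([0.9, 0.2]))
print(vmax, km)  # close to the reference values Vmax ~ 0.362, Km ~ 0.556
```

Note that, like the Accord class, this sketch uses an analytic Jacobian; swapping in finite differences only changes how J is filled in.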