# LinearDualCoordinateDescent Class

L2-regularized, L1- or L2-loss dual formulation Support Vector Machine learning (liblinear's -s 1 and -s 3).
## Inheritance Hierarchy

System.Object
  Accord.MachineLearning.BinaryLearningBase&lt;SupportVectorMachine, Double&gt;
    Accord.MachineLearning.VectorMachines.Learning.BaseSupportVectorClassification&lt;SupportVectorMachine, Linear, Double&gt;
      Accord.MachineLearning.VectorMachines.Learning.BaseLinearDualCoordinateDescent&lt;SupportVectorMachine, Linear, Double&gt;
        Accord.MachineLearning.VectorMachines.Learning.LinearDualCoordinateDescent

**Namespace:** Accord.MachineLearning.VectorMachines.Learning
**Assembly:** Accord.MachineLearning (in Accord.MachineLearning.dll) Version: 3.5.0
## Syntax

```csharp
public class LinearDualCoordinateDescent : BaseLinearDualCoordinateDescent<SupportVectorMachine, Linear, double[]>,
    ILinearSupportVectorMachineLearning,
    ISupervisedLearning<SupportVectorMachine, double[], double>,
    ISupervisedLearning<SupportVectorMachine, double[], int>,
    ISupervisedLearning<SupportVectorMachine, double[], bool>,
    ISupportVectorMachineLearning,
    ISupportVectorMachineLearning<double[]>,
    ISupervisedBinaryLearning<ISupportVectorMachine<double[]>, double[]>,
    ISupervisedMulticlassLearning<ISupportVectorMachine<double[]>, double[]>,
    ISupervisedMultilabelLearning<ISupportVectorMachine<double[]>, double[]>,
    ISupervisedLearning<ISupportVectorMachine<double[]>, double[], int[]>,
    ISupervisedLearning<ISupportVectorMachine<double[]>, double[], bool[]>,
    ISupervisedLearning<ISupportVectorMachine<double[]>, double[], int>,
    ISupervisedLearning<ISupportVectorMachine<double[]>, double[], bool>,
    ISupervisedLearning<ISupportVectorMachine<double[]>, double[], double>
```

The LinearDualCoordinateDescent type exposes the following members.

## Constructors

**LinearDualCoordinateDescent()**
Initializes a new instance of the LinearDualCoordinateDescent class.
## Properties

**C**
Gets or sets the cost values associated with each input vector. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Complexity**
Complexity (cost) parameter C. Increasing the value of C forces the creation of a more accurate model that may not generalize well. If this value is not set and UseComplexityHeuristic is set to true, the framework will automatically guess a value for C. If this value is manually set to something else, then UseComplexityHeuristic will be automatically disabled and the given value will be used instead. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Inputs**
Gets or sets the input vectors for training. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Kernel**
Gets or sets the kernel function used to create a kernel Support Vector Machine. If this property is set, UseKernelEstimation will be set to false. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Lagrange**
Gets the values of the Lagrange multipliers (alpha) for every observation vector. (Inherited from `BaseLinearDualCoordinateDescent<TModel, TKernel, TInput>`.)

**Loss**
Gets or sets the loss cost function that should be optimized. Default is L2. (Inherited from `BaseLinearDualCoordinateDescent<TModel, TKernel, TInput>`.)

**Model**
Gets or sets the classifier being learned. (Inherited from `BinaryLearningBase<TModel, TInput>`.)

**NegativeWeight**
Gets or sets the negative class weight. This should be a value higher than 0 indicating how much of the Complexity parameter C should be applied to instances carrying the negative label. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Outputs**
Gets or sets the output labels for each training vector. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**PositiveWeight**
Gets or sets the positive class weight. This should be a value higher than 0 indicating how much of the Complexity parameter C should be applied to instances carrying the positive label. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Token**
Gets or sets a cancellation token that can be used to stop the learning algorithm while it is running. (Inherited from `BinaryLearningBase<TModel, TInput>`.)

**Tolerance**
Convergence tolerance. Default value is 0.1. (Inherited from `BaseLinearDualCoordinateDescent<TModel, TKernel, TInput>`.)

**UseClassProportions**
Gets or sets a value indicating whether the weight ratio between Complexity values for negative and positive instances should be computed automatically from the data proportions. Default is false. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**UseComplexityHeuristic**
Gets or sets a value indicating whether the Complexity parameter C should be computed automatically by employing a heuristic rule. Default is true. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**UseKernelEstimation**
Gets or sets whether initial values for some kernel parameters should be estimated from the data, if possible. Default is true. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**WeightRatio**
Gets or sets the weight ratio between positive and negative class weights. This ratio controls how much of the Complexity parameter C should be applied to the positive class. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)
## Methods

**ComputeError** (Obsolete)
Computes the error rate for a given set of inputs and outputs. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Create**
Creates an instance of the model to be learned. Inheritors of this abstract class must define this method so new models can be created from the training data. (Overrides `BaseSupportVectorClassification<TModel, TKernel, TInput>.Create(Int32, TKernel)`.)

**Equals**
Determines whether the specified object is equal to the current object. (Inherited from Object.)

**Finalize**
Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)

**GetHashCode**
Serves as the default hash function. (Inherited from Object.)

**GetType**
Gets the Type of the current instance. (Inherited from Object.)

**InnerRun**
Runs the learning algorithm. (Inherited from `BaseLinearDualCoordinateDescent<TModel, TKernel, TInput>`.)

**Learn(TInput, Boolean, Double)**
Learns a model that can map the given inputs to the given outputs. (Inherited from `BinaryLearningBase<TModel, TInput>`.)

**Learn(TInput, Double, Double)**
Learns a model that can map the given inputs to the given outputs. (Inherited from `BinaryLearningBase<TModel, TInput>`.)

**Learn(TInput, Int32, Double)**
Learns a model that can map the given inputs to the given outputs. (Inherited from `BinaryLearningBase<TModel, TInput>`.)

**Learn(TInput, Boolean, Double)**
Learns a model that can map the given inputs to the given outputs. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**MemberwiseClone**
Creates a shallow copy of the current Object. (Inherited from Object.)

**Run** (Obsolete)
Obsolete. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**Run(Boolean)** (Obsolete)
Obsolete. (Inherited from `BaseSupportVectorClassification<TModel, TKernel, TInput>`.)

**ToString**
Returns a string that represents the current object. (Inherited from Object.)
## Extension Methods

**HasMethod**
Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)

**IsEqual**
Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)

Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)

Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by Matrix.)
## Remarks

This class implements a SupportVectorMachine learning algorithm crafted specifically for linear machines. It provides an L2-regularized, L1- or L2-loss coordinate descent learning algorithm that optimizes the dual form of the learning problem. The code is based on liblinear's solve_l2r_l1l2_svc method, whose original description is reproduced below.

Liblinear's solver -s 1: L2R_L2LOSS_SVC_DUAL and -s 3: L2R_L1LOSS_SVC_DUAL. A coordinate descent algorithm for L1-loss and L2-loss SVM problems in the dual.

```
min_\alpha  0.5 (\alpha^T (Q + D) \alpha) - e^T \alpha,
s.t.        0 <= \alpha_i <= upper_bound_i,
```

where Q_ij = y_i y_j x_i^T x_j and D is a diagonal matrix.

In the L1-SVM case:

```
upper_bound_i = Cp         if y_i =  1
upper_bound_i = Cn         if y_i = -1
D_ii = 0
```

In the L2-SVM case:

```
upper_bound_i = INF
D_ii = 1/(2*Cp)            if y_i =  1
D_ii = 1/(2*Cn)            if y_i = -1
```

Given: x, y, Cp, Cn, and eps as the stopping tolerance.

See Algorithm 3 of Hsieh et al., ICML 2008.
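To make the projected-gradient coordinate updates concrete, here is a minimal sketch of Algorithm 3 in Python/NumPy. This is an illustration only, not the Accord.NET or liblinear implementation (both are more elaborate, with shrinking and a bias term); the function name and default arguments below are hypothetical:

```python
import numpy as np

def dual_cd_linear_svm(X, y, C=1.0, loss="l2", tol=0.1, max_iter=1000, seed=0):
    """Sketch of dual coordinate descent for a linear SVM (Hsieh et al., 2008).

    X: (n, d) float array; y: labels in {-1, +1}.
    Returns the primal weight vector w = sum_i alpha_i * y_i * x_i (no bias).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if loss == "l1":
        upper, Dii = C, 0.0              # L1-SVM: 0 <= alpha_i <= C, D_ii = 0
    else:
        upper, Dii = np.inf, 1.0 / (2.0 * C)  # L2-SVM: alpha unbounded above
    Qii = np.einsum("ij,ij->i", X, X) + Dii   # diagonal of Q + D
    alpha = np.zeros(n)
    w = np.zeros(d)
    for _ in range(max_iter):
        max_pg = 0.0
        for i in rng.permutation(n):
            # gradient of the dual objective with respect to alpha_i
            G = y[i] * (w @ X[i]) - 1.0 + Dii * alpha[i]
            # projected gradient respecting the box constraint on alpha_i
            if alpha[i] == 0.0:
                PG = min(G, 0.0)
            elif alpha[i] == upper:
                PG = max(G, 0.0)
            else:
                PG = G
            max_pg = max(max_pg, abs(PG))
            if PG != 0.0:
                a_old = alpha[i]
                # one-coordinate Newton step, clipped to [0, upper]
                alpha[i] = min(max(a_old - G / Qii[i], 0.0), upper)
                # update w incrementally so w = sum_i alpha_i y_i x_i
                w += (alpha[i] - a_old) * y[i] * X[i]
        if max_pg < tol:  # eps, the stopping tolerance in the description above
            break
    return w
```

In Accord.NET the same stopping tolerance is exposed through the Tolerance property, and the L1/L2 choice through the Loss property.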

## Examples

The next example shows how to solve a multi-class problem using a one-vs-one SVM where the binary machines are learned using the Linear Dual Coordinate Descent algorithm.

```csharp
// Let's say we have the following data to be classified
// into three possible classes. Those are the samples:
//
double[][] inputs =
{
    //               input         output
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 0, 0, 1, 0 }, //  0
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 1, 0, 0, 0 }, //  1
    new double[] { 1, 0, 0, 0 }, //  1
    new double[] { 1, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 1, 1, 1, 1 }, //  2
    new double[] { 1, 0, 1, 1 }, //  2
    new double[] { 1, 1, 0, 1 }, //  2
    new double[] { 0, 1, 1, 1 }, //  2
    new double[] { 1, 1, 1, 1 }, //  2
};

int[] outputs = // those are the class labels
{
    0, 0, 0, 0, 0,
    1, 1, 1, 1, 1,
    2, 2, 2, 2, 2,
};

// Create a one-vs-one multi-class SVM learning algorithm
var teacher = new MulticlassSupportVectorLearning<Linear>()
{
    // using LIBLINEAR's L2-loss SVC dual for each SVM
    Learner = (p) => new LinearDualCoordinateDescent()
    {
        Loss = Loss.L2
    }
};

// Configure parallel execution options
teacher.ParallelOptions.MaxDegreeOfParallelism = 1;

// Learn a machine
var machine = teacher.Learn(inputs, outputs);

// Obtain class predictions for each sample
int[] predicted = machine.Decide(inputs);

// Compute classification error
double error = new ZeroOneLoss(outputs).Loss(predicted);
```

The following example shows how to obtain a MultipleLinearRegression from a linear SupportVectorMachine. It contains exactly the same data used in the OrdinaryLeastSquares documentation page for MultipleLinearRegression.

```csharp
// Declare some training data. This is exactly the same
// data used in the MultipleLinearRegression documentation page

// We will try to model a plane as an equation in the form
// "ax + by + c = z". We have two input variables (x and y)
// and we will be trying to find two parameters a and b and
// an intercept term c.

// Create the linear-SVM learning algorithm
var teacher = new LinearDualCoordinateDescent()
{
    Tolerance = 1e-10,
    Complexity = 1e+10, // learn a hard-margin model
};

// Now suppose you have some points
double[][] inputs =
{
    new double[] { 1, 1 },
    new double[] { 0, 1 },
    new double[] { 1, 0 },
    new double[] { 0, 0 },
};

// located in the same Z (z = 1)
double[] outputs = { 1, 1, 1, 1 };

// Learn the support vector machine
var svm = teacher.Learn(inputs, outputs);

// Convert the svm to a multiple linear regression
var regression = (MultipleLinearRegression)svm;

// As a result, we will be given the following:
double a = regression.Weights[0]; // a = 0
double b = regression.Weights[1]; // b = 0
double c = regression.Intercept;  // c = 1

// This is the plane described by the equation
// ax + by + c = z => 0x + 0y + 1 = z => 1 = z.

// We can compute the predicted points using
double[] predicted = regression.Transform(inputs);

// And the squared error loss using
double error = new SquareLoss(outputs).Loss(predicted);
```