Accord.NET Framework

## SequentialMinimalOptimization&lt;TKernel&gt; Class

Sequential Minimal Optimization (SMO) Algorithm.

Inheritance Hierarchy

System.Object
  Accord.MachineLearning.BinaryLearningBase<SupportVectorMachine<TKernel>, Double[]>
    Accord.MachineLearning.VectorMachines.Learning.BaseSupportVectorClassification<SupportVectorMachine<TKernel>, TKernel, Double[]>
      Accord.MachineLearning.VectorMachines.Learning.BaseSequentialMinimalOptimization<SupportVectorMachine<TKernel>, TKernel, Double[]>
        Accord.MachineLearning.VectorMachines.Learning.SequentialMinimalOptimization<TKernel>


Syntax

```csharp
public class SequentialMinimalOptimization<TKernel> : BaseSequentialMinimalOptimization<SupportVectorMachine<TKernel>, TKernel, double[]>
    where TKernel : IKernel<double[]>
```

Type Parameters

- TKernel — the type of the kernel function to use (any type implementing IKernel<double[]>).

The SequentialMinimalOptimization<TKernel> type exposes the following members.
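As a quick orientation before the member tables below, a typical instantiation might look like the following sketch. The Gaussian kernel and the property values shown here are illustrative choices only, not recommendations; Complexity and Tolerance are the properties documented below.

```csharp
// A minimal sketch: create an SMO teacher for a Gaussian-kernel SVM.
// The values below are illustrative, not tuned for any problem.
var smo = new SequentialMinimalOptimization<Gaussian>()
{
    Complexity = 100,  // fix the cost parameter C explicitly...
    Tolerance = 1e-2   // ...and keep the default convergence tolerance
};
```

Note that setting Complexity manually automatically disables UseComplexityHeuristic, as described in the Properties table below.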

Constructors

Name | Description
---|---
SequentialMinimalOptimization<TKernel>() | Initializes a new instance of the SequentialMinimalOptimization<TKernel> class.

Properties

Name | Description
---|---
ActiveExamples | Gets the indices of the active examples (examples whose corresponding Lagrange multiplier is different from zero). (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
BoundedExamples | Gets the indices of the examples at the boundary (examples whose corresponding Lagrange multipliers are equal to C). (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
C | Gets or sets the cost values associated with each input vector. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
CacheSize | Gets or sets the cache size used to partially store the kernel matrix. The default is the number of input vectors, meaning the entire kernel matrix will be computed and cached in memory. If set to zero, the cache is disabled and all operations are computed as needed. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Compact | Obsolete. Gets or sets whether to produce compact models. The compact formulation is currently limited to linear models. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Complexity | Complexity (cost) parameter C. Increasing the value of C forces the creation of a more accurate model that may not generalize well. If this value is not set and UseComplexityHeuristic is set to true, the framework will automatically guess a value for C. If this value is set manually, UseComplexityHeuristic will be automatically disabled and the given value will be used instead. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
Epsilon | Epsilon for round-off errors. The default value is 1e-6. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Inputs | Gets or sets the input vectors for training. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
Kernel | Gets or sets the kernel function used to create a kernel Support Vector Machine. If this property is set, UseKernelEstimation will be set to false. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
Lagrange | Gets the value of the Lagrange multipliers (alpha) for every observation vector. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Model | Gets or sets the classifier being learned. (Inherited from BinaryLearningBase<TModel, TInput>.)
NegativeWeight | Gets or sets the negative class weight. This should be a value greater than 0 indicating how much of the Complexity parameter C should be applied to instances carrying the negative label. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
NonBoundExamples | Gets the indices of the non-bounded examples (examples whose corresponding Lagrange multipliers lie strictly between 0 and C). (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Outputs | Gets or sets the output labels for each training vector. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
PositiveWeight | Gets or sets the positive class weight. This should be a value greater than 0 indicating how much of the Complexity parameter C should be applied to instances carrying the positive label. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
Shrinking | Gets or sets a value indicating whether shrinking heuristics should be used. The default is false. Note: this property can only be used when Strategy is set to SecondOrder. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Strategy | Gets or sets the pair selection strategy to be used during optimization. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Token | Gets or sets a cancellation token that can be used to stop the learning algorithm while it is running. (Inherited from BinaryLearningBase<TModel, TInput>.)
Tolerance | Convergence tolerance. The default value is 1e-2. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
UseClassProportions | Gets or sets a value indicating whether the weight ratio between Complexity values for negative and positive instances should be computed automatically from the data proportions. The default is false. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
UseComplexityHeuristic | Gets or sets a value indicating whether the Complexity parameter C should be computed automatically by employing a heuristic rule. The default is true. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
UseKernelEstimation | Gets or sets whether initial values for some kernel parameters should be estimated from the data, if possible. The default is true. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
WeightRatio | Gets or sets the weight ratio between positive and negative class weights. This ratio controls how much of the Complexity parameter C should be applied to the positive class. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)

Methods

Name | Description
---|---
ComputeError | Obsolete. Computes the error rate for a given set of inputs and outputs. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
Create | Creates an instance of the model to be learned. Inheritors of this abstract class must define this method so new models can be created from the training data. (Overrides BaseSupportVectorClassification<TModel, TKernel, TInput>.Create(Int32, TKernel).)
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
InnerRun | Runs the learning algorithm. (Inherited from BaseSequentialMinimalOptimization<TModel, TKernel, TInput>.)
Learn(TInput[], Boolean[], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from BinaryLearningBase<TModel, TInput>.)
Learn(TInput[], Double[], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from BinaryLearningBase<TModel, TInput>.)
Learn(TInput[], Int32[], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from BinaryLearningBase<TModel, TInput>.)
Learn(TInput[], Int32[], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from BinaryLearningBase<TModel, TInput>.)
Learn(TInput[], Boolean[], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
Run() | Obsolete. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
Run(Boolean) | Obsolete. (Inherited from BaseSupportVectorClassification<TModel, TKernel, TInput>.)
ToString | Returns a string that represents the current object. (Inherited from Object.)

Extension Methods

Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types at runtime. (Defined by ExtensionMethods.)
To<T>() | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types at runtime. (Defined by ExtensionMethods.)

Remarks

The SMO algorithm is an algorithm for solving large quadratic programming (QP) optimization problems, widely used for the training of support vector machines. First developed by John C. Platt in 1998, SMO breaks up large QP problems into a series of smallest possible QP problems, which are then solved analytically.

This class follows the original algorithm by Platt with additional modifications by Keerthi et al.

This class can also be used in combination with MulticlassSupportVectorLearning<TKernel> or MultilabelSupportVectorLearning<TKernel> to learn MulticlassSupportVectorMachine<TKernel> models using the one-vs-one or one-vs-all multi-class decision strategies, respectively.
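For reference, each two-multiplier subproblem that SMO extracts has a closed-form solution. The following is a sketch of the update derived in Platt's paper (notation ours: E_i is the prediction error on example i, K_ij the kernel value between examples i and j, and [L, H] the clipping interval induced by the constraints 0 ≤ α_i ≤ C and Σ y_i α_i = 0):

$$
\eta = K_{11} + K_{22} - 2K_{12},
\qquad
\alpha_2^{\mathrm{new}} = \operatorname{clip}\!\left(\alpha_2 + \frac{y_2\,(E_1 - E_2)}{\eta},\; L,\; H\right),
$$

$$
\alpha_1^{\mathrm{new}} = \alpha_1 + y_1 y_2 \left(\alpha_2 - \alpha_2^{\mathrm{new}}\right).
$$

The second equation preserves the linear constraint Σ y_i α_i = 0, which is why exactly two multipliers must be updated jointly at each step.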

References:

- Wikipedia, The Free Encyclopedia. Sequential Minimal Optimization. Available on: http://en.wikipedia.org/wiki/Sequential_Minimal_Optimization
- John C. Platt, Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines. 1998. Available on: http://research.microsoft.com/en-us/um/people/jplatt/smoTR.pdf
- S. S. Keerthi et al. Improvements to Platt's SMO Algorithm for SVM Classifier Design. Technical Report CD-99-14. Available on: http://www.cs.iastate.edu/~honavar/keerthi-svm.pdf
- J. P. Lewis. A Short SVM (Support Vector Machine) Tutorial. Available on: http://www.idiom.com/~zilla/Work/Notes/svmtutorial.pdf

Examples

The following example shows how to use an SVM to learn a simple XOR function.

```csharp
// As an example, we will try to learn a decision machine
// that can replicate the "exclusive-or" logical function:

double[][] inputs =
{
    new double[] { 0, 0 }, // the XOR function takes two booleans
    new double[] { 0, 1 }, // and computes their exclusive or: the
    new double[] { 1, 0 }, // output is true only if the two booleans
    new double[] { 1, 1 }  // are different
};

int[] xor = // this is the output of the xor function
{
    0, // 0 xor 0 = 0 (inputs are equal)
    1, // 0 xor 1 = 1 (inputs are different)
    1, // 1 xor 0 = 1 (inputs are different)
    0, // 1 xor 1 = 0 (inputs are equal)
};

// Now, we can create the sequential minimal optimization teacher
var learn = new SequentialMinimalOptimization<Gaussian>()
{
    UseComplexityHeuristic = true,
    UseKernelEstimation = true
};

// And then we can obtain a trained SVM by calling its Learn method
SupportVectorMachine<Gaussian> svm = learn.Learn(inputs, xor);

// Finally, we can obtain the decisions predicted by the machine:
bool[] prediction = svm.Decide(inputs);
```
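Continuing from the example above, one way to sanity-check the trained machine is to count how many training samples are misclassified. This continuation is our own sketch (it reuses the `prediction` and `xor` variables from the snippet and is not part of the original sample):

```csharp
// Sketch: compare the boolean decisions against the expected XOR labels.
int errors = 0;
for (int i = 0; i < xor.Length; i++)
    if (prediction[i] != (xor[i] == 1))
        errors++;
// With a Gaussian kernel the XOR training set is separable, so we
// would typically expect errors to be 0 here.
```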

The next example shows how to solve a multi-class problem using a one-vs-one SVM where the binary machines are learned using SMO.

```csharp
// Always generate the same random numbers
Accord.Math.Random.Generator.Seed = 0;

// The following is a simple auto-association function in which
// the last column of each input corresponds to its own class. This
// problem should be easily solved using a Linear kernel.

// Sample input data
double[][] inputs =
{
    new double[] { 1, 2, 0 },
    new double[] { 6, 2, 3 },
    new double[] { 1, 1, 1 },
    new double[] { 7, 6, 2 },
};

// Output for each of the inputs
int[] outputs = { 0, 3, 1, 2 };

// Create the multi-class learning algorithm for the machine
var teacher = new MulticlassSupportVectorLearning<Linear>()
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Linear>()
    {
        // If you would like to use other kernels, simply replace
        // the generic parameter with the desired kernel class, such
        // as, for example, Polynomial or Gaussian:
        Kernel = new Linear() // use the Linear kernel
    }
};

// Estimate the multi-class support vector machine using the one-vs-one method
MulticlassSupportVectorMachine<Linear> ovo = teacher.Learn(inputs, outputs);

// Obtain class predictions for each sample
int[] predicted = ovo.Decide(inputs);

// Compute classification error
double error = new ZeroOneLoss(outputs).Loss(predicted);
```

The same as before, but using a Gaussian kernel.

```csharp
// Let's say we have the following data to be classified
// into three possible classes. These are the samples:
//
double[][] inputs =
{
    //            input               output
    new double[] { 0, 1, 1, 0 }, //     0
    new double[] { 0, 1, 0, 0 }, //     0
    new double[] { 0, 0, 1, 0 }, //     0
    new double[] { 0, 1, 1, 0 }, //     0
    new double[] { 0, 1, 0, 0 }, //     0
    new double[] { 1, 0, 0, 0 }, //     1
    new double[] { 1, 0, 0, 0 }, //     1
    new double[] { 1, 0, 0, 1 }, //     1
    new double[] { 0, 0, 0, 1 }, //     1
    new double[] { 0, 0, 0, 1 }, //     1
    new double[] { 1, 1, 1, 1 }, //     2
    new double[] { 1, 0, 1, 1 }, //     2
    new double[] { 1, 1, 0, 1 }, //     2
    new double[] { 0, 1, 1, 1 }, //     2
    new double[] { 1, 1, 1, 1 }, //     2
};

int[] outputs = // these are the class labels
{
    0, 0, 0, 0, 0,
    1, 1, 1, 1, 1,
    2, 2, 2, 2, 2,
};

// Create the multi-class learning algorithm for the machine
var teacher = new MulticlassSupportVectorLearning<Gaussian>()
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        // Estimate a suitable guess for the Gaussian kernel's parameters.
        // This estimate can serve as a starting point for a grid search.
        UseKernelEstimation = true
    }
};

// The following line is only needed to ensure reproducible results.
// Remove, comment, or change it to enable full parallelization.
teacher.ParallelOptions.MaxDegreeOfParallelism = 1;

// Learn a machine
var machine = teacher.Learn(inputs, outputs);

// Obtain class predictions for each sample
int[] predicted = machine.Decide(inputs);

// Get class scores for each sample
double[] scores = machine.Score(inputs);

// Compute classification error
double error = new ZeroOneLoss(outputs).Loss(predicted);
```

The following example shows how to learn a simple binary SVM using a precomputed kernel matrix obtained from a Gaussian kernel.

```csharp
// As an example, we will try to learn a decision machine
// that can replicate the "exclusive-or" logical function:

double[][] inputs =
{
    new double[] { 0, 0 }, // the XOR function takes two booleans
    new double[] { 0, 1 }, // and computes their exclusive or: the
    new double[] { 1, 0 }, // output is true only if the two booleans
    new double[] { 1, 1 }  // are different
};

int[] xor = // this is the output of the xor function
{
    0, // 0 xor 0 = 0 (inputs are equal)
    1, // 0 xor 1 = 1 (inputs are different)
    1, // 1 xor 0 = 1 (inputs are different)
    0, // 1 xor 1 = 0 (inputs are equal)
};

// Let's use a Gaussian kernel
var kernel = new Gaussian(0.1);

// Create a pre-computed Gaussian kernel matrix
var precomputed = new Precomputed(kernel.ToJagged(inputs));

// Now, we can create the sequential minimal optimization teacher
var learn = new SequentialMinimalOptimization<Precomputed, int>()
{
    Kernel = precomputed // set the precomputed kernel we created
};

// And then we can obtain the SVM by using Learn
var svm = learn.Learn(precomputed.Indices, xor);

// Finally, we can obtain the decisions predicted by the machine:
bool[] prediction = svm.Decide(precomputed.Indices);

// We can also compute the machine's predictions for new samples
double[][] sample =
{
    new double[] { 0, 1 }
};

// Update the precomputed kernel with the new samples
precomputed = new Precomputed(kernel.ToJagged2(inputs, sample));

// Update the SVM kernel
svm.Kernel = precomputed;

// Compute the predictions for the new samples
bool[] newPrediction = svm.Decide(precomputed.Indices);
```

See Also