MultilabelSupportVectorLearning&lt;TKernel, TInput&gt; Class
Namespace: Accord.MachineLearning.VectorMachines.Learning
```csharp
public class MultilabelSupportVectorLearning<TKernel, TInput> :
    BaseMultilabelSupportVectorLearning<TInput, SupportVectorMachine<TKernel, TInput>,
        TKernel, MultilabelSupportVectorMachine<TKernel, TInput>>
    where TKernel : IKernel<TInput>
    where TInput : ICloneable
```
The MultilabelSupportVectorLearning&lt;TKernel, TInput&gt; type exposes the following members.
Name | Description
---|---
MultilabelSupportVectorLearning&lt;TKernel, TInput&gt;() | Initializes a new instance of the MultilabelSupportVectorLearning&lt;TKernel, TInput&gt; class.
MultilabelSupportVectorLearning&lt;TKernel, TInput&gt;(MultilabelSupportVectorMachine&lt;TKernel, TInput&gt;) | Initializes a new instance of the MultilabelSupportVectorLearning&lt;TKernel, TInput&gt; class, starting from an existing machine.
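To make the two constructors concrete, here is a minimal sketch (not from the library's documentation; variable names are illustrative, and `previouslyTrained` is assumed to be a MultilabelSupportVectorMachine&lt;Gaussian, double[]&gt; obtained elsewhere, e.g. from an earlier Learn call):

```csharp
// Start from scratch: the teacher creates the model during Learn.
var teacher = new MultilabelSupportVectorLearning<Gaussian, double[]>()
{
    Learner = (p) => new SequentialMinimalOptimization<Gaussian, double[]>()
};

// Continue from an existing machine, e.g. to calibrate or refine it.
// 'previouslyTrained' is a hypothetical, already-created machine.
var continuation = new MultilabelSupportVectorLearning<Gaussian, double[]>(previouslyTrained);
```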
Name | Description
---|---
AggregateExceptions | Gets or sets a value indicating whether exceptions thrown by one of the inner binary learning problems should be aggregated so the remaining subproblems keep running, instead of stopping the entire training algorithm. Default is true (execution will not be stopped). (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
IsMultilabel | Gets or sets a value indicating whether the learning algorithm should generate multi-label (as opposed to multi-class) models. If left unspecified, the type of the model will be determined automatically depending on which overload of the Learn(TInput[], Boolean[][], Double[]) method is called first by the executing code. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
Kernel | Gets or sets the kernel function to be used to learn the kernel support vector machines. (Inherited from BaseMultilabelSupportVectorLearning&lt;TInput, TBinary, TKernel, TModel&gt;.)
Learner | Gets or sets a function that takes a set of parameters and creates a learning algorithm for learning each of the binary inner classifiers needed by the one-vs-rest classification strategy. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
Model | Gets or sets the model being learned. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
ParallelOptions | Gets or sets the parallelization options for this algorithm. (Inherited from ParallelLearningBase.)
Token | Gets or sets a cancellation token that can be used to cancel the algorithm while it is running. (Inherited from ParallelLearningBase.)
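To make the role of these properties concrete, here is a minimal, hypothetical configuration sketch (variable names and parameter values are illustrative, not from the library's documentation):

```csharp
// Token source used to cancel training from another thread if needed.
var cancellation = new System.Threading.CancellationTokenSource();

var teacher = new MultilabelSupportVectorLearning<Gaussian, double[]>()
{
    // Learner: factory that builds the binary learner for each one-vs-rest subproblem.
    Learner = (p) => new SequentialMinimalOptimization<Gaussian, double[]>()
    {
        UseKernelEstimation = true
    },

    // Token: lets the algorithm be cancelled while it is running.
    Token = cancellation.Token
};

// ParallelOptions: limit how many subproblems are trained concurrently.
// (AggregateExceptions is true by default, so a failure in one subproblem
// does not immediately stop the others.)
teacher.ParallelOptions.MaxDegreeOfParallelism = 2;
```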
Name | Description
---|---
Configure&lt;TResult&gt;(Func&lt;TResult&gt;) | Sets a callback function that takes a set of parameters and creates a learning algorithm for learning each of the binary inner classifiers needed by the one-vs-rest classification strategy. Calling this method sets the Learner property. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
Configure&lt;T, TResult&gt;(Func&lt;T, TResult&gt;) | Sets a callback function that takes a set of parameters and creates a learning algorithm for learning each of the binary inner classifiers needed by the one-vs-rest classification strategy. Calling this method sets the Learner property. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
Create | Creates an instance of the model to be learned. Inheritors of this abstract class must define this method so new models can be created from the training data. (Overrides OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.Create(Int32, Int32, Boolean).)
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
Learn(TInput[], Boolean[][], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
Learn(TInput[], Int32[], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
Learn(TInput[], Int32[][], Double[]) | Learns a model that can map the given inputs to the given outputs. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
OnSubproblemFinished | Raises the SubproblemFinished event. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
OnSubproblemStarted | Raises the SubproblemStarted event. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
ToString | Returns a string that represents the current object. (Inherited from Object.)
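As noted for the IsMultilabel property above, the Learn overloads accept either integer class indices or boolean indicator vectors. The following sketch (illustrative only; it assumes `inputs` is a double[][] with three samples and that `teacherA` and `teacherB` are freshly configured MultilabelSupportVectorLearning instances) contrasts the two:

```csharp
// (a) Integer class indices: one label per sample; the teacher expands
//     them into one binary (one-vs-rest) subproblem per class.
var machineA = teacherA.Learn(inputs, new[] { 0, 1, 2 });

// (b) Boolean indicator vectors: one flag per class and per sample,
//     so a sample may carry several positive labels at once.
bool[][] indicators =
{
    new[] { true,  false, false }, // sample 0: class 0 only
    new[] { false, true,  true  }, // sample 1: classes 1 and 2
    new[] { false, false, true  }, // sample 2: class 2 only
};
var machineB = teacherB.Learn(inputs, indicators);
```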
Name | Description
---|---
SubproblemFinished | Occurs when the learning of a subproblem has finished. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
SubproblemStarted | Occurs when the learning of a subproblem has started. (Inherited from OneVsRestLearning&lt;TInput, TBinary, TModel&gt;.)
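These events can be used to monitor training progress. A minimal, hypothetical sketch (assuming `teacher` is a MultilabelSupportVectorLearning instance; the handler bodies are purely illustrative):

```csharp
// Report when each binary one-vs-rest subproblem begins and ends.
teacher.SubproblemStarted += (sender, e) =>
    Console.WriteLine("Started a binary subproblem.");

teacher.SubproblemFinished += (sender, e) =>
    Console.WriteLine("Finished a binary subproblem.");
```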
Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To&lt;T&gt;() | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
This class can be used to train kernel support vector machines with any compatible binary learning algorithm, using a one-vs-rest (one-against-all) strategy. The underlying training algorithm can be configured by setting the Learner property.
One example of a learning algorithm that can be used with this class is the Sequential Minimal Optimization (SMO) algorithm.
The following example shows how to learn a linear, multi-label (one-vs-rest) support vector machine using the LinearDualCoordinateDescent algorithm.
```csharp
// Let's say we have the following data to be classified
// into three possible classes. Those are the samples:
//
double[][] inputs =
{
    //               input         output
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 0, 0, 1, 0 }, //  0
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 1, 0, 0, 0 }, //  1
    new double[] { 1, 0, 0, 0 }, //  1
    new double[] { 1, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 1, 1, 1, 1 }, //  2
    new double[] { 1, 0, 1, 1 }, //  2
    new double[] { 1, 1, 0, 1 }, //  2
    new double[] { 0, 1, 1, 1 }, //  2
    new double[] { 1, 1, 1, 1 }, //  2
};

int[] outputs = // those are the class labels
{
    0, 0, 0, 0, 0,
    1, 1, 1, 1, 1,
    2, 2, 2, 2, 2,
};

// Create a one-vs-rest multi-label SVM learning algorithm
var teacher = new MultilabelSupportVectorLearning<Linear>()
{
    // using LIBLINEAR's L2-loss SVC dual for each SVM
    Learner = (p) => new LinearDualCoordinateDescent()
    {
        Loss = Loss.L2
    }
};

// The following line is only needed to ensure reproducible results.
// Remove, comment, or change it to enable full parallelization.
teacher.ParallelOptions.MaxDegreeOfParallelism = 1;

// Learn a machine
var machine = teacher.Learn(inputs, outputs);

// Obtain class predictions for each sample
bool[][] predicted = machine.Decide(inputs);

// Compute classification error using mean accuracy (mAcc)
double error = new HammingLoss(outputs).Loss(predicted);
```
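Because the machine above is multi-label, Decide returns one boolean vector per sample rather than a single class index. The following post-processing sketch (plain System.Linq, not part of the library example above) lists the classes flagged for each sample:

```csharp
// For each sample, print the indices of the classes whose one-vs-rest
// decision was positive; a sample may activate zero, one, or several classes.
for (int i = 0; i < predicted.Length; i++)
{
    var active = Enumerable.Range(0, predicted[i].Length)
                           .Where(c => predicted[i][c]);
    Console.WriteLine("Sample {0}: [{1}]", i, string.Join(", ", active));
}
```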
The following example shows how to learn a non-linear, multi-label (one-vs-rest) support vector machine using the Gaussian kernel and the SequentialMinimalOptimization&lt;TKernel&gt; algorithm.
```csharp
// Let's say we have the following data to be classified
// into three possible classes. Those are the samples:
//
double[][] inputs =
{
    //               input         output
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 0, 0, 1, 0 }, //  0
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 1, 0, 0, 0 }, //  1
    new double[] { 1, 0, 0, 0 }, //  1
    new double[] { 1, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 1, 1, 1, 1 }, //  2
    new double[] { 1, 0, 1, 1 }, //  2
    new double[] { 1, 1, 0, 1 }, //  2
    new double[] { 0, 1, 1, 1 }, //  2
    new double[] { 1, 1, 1, 1 }, //  2
};

int[] outputs = // those are the class labels
{
    0, 0, 0, 0, 0,
    1, 1, 1, 1, 1,
    2, 2, 2, 2, 2,
};

// Create the multi-label learning algorithm for the machine
var teacher = new MultilabelSupportVectorLearning<Gaussian>()
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        // Estimate a suitable guess for the Gaussian kernel's parameters.
        // This estimate can serve as a starting point for a grid search.
        UseKernelEstimation = true
    }
};

// The following line is only needed to ensure reproducible results.
// Remove, comment, or change it to enable full parallelization.
teacher.ParallelOptions.MaxDegreeOfParallelism = 1;

// Learn a machine
var machine = teacher.Learn(inputs, outputs);

// Obtain class predictions for each sample
bool[][] predicted = machine.Decide(inputs);

// Get class scores for each sample
double[][] scores = machine.Scores(inputs);

// Compute classification error using mean accuracy (mAcc)
double error = new HammingLoss(outputs).Loss(predicted);
```
Support vector machines can have their weights calibrated in order to produce probability estimates (instead of simple class separation distances). The following example shows how to use ProbabilisticOutputCalibration&lt;TKernel&gt; within MultilabelSupportVectorLearning&lt;TKernel, TInput&gt; to generate a probabilistic SVM:
```csharp
// Let's say we have the following data to be classified
// into three possible classes. Those are the samples:
//
double[][] inputs =
{
    //               input         output
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 0, 0, 1, 0 }, //  0
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 1, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 1, 0, 1, 1 }, //  2
    new double[] { 1, 1, 0, 1 }, //  2
    new double[] { 0, 1, 1, 1 }, //  2
    new double[] { 1, 1, 1, 1 }, //  2
};

int[] outputs = // those are the class labels
{
    0, 0, 0, 0, 0,
    1, 1, 1,
    2, 2, 2, 2,
};

// Create the multi-label learning algorithm for the machine
var teacher = new MultilabelSupportVectorLearning<Gaussian>()
{
    // Configure the learning algorithm to use SMO to train the
    // underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        // Estimate a suitable guess for the Gaussian kernel's parameters.
        // This estimate can serve as a starting point for a grid search.
        UseKernelEstimation = true
    }
};

// Learn a machine
var machine = teacher.Learn(inputs, outputs);

// Create the calibration algorithm for the trained machine
var calibration = new MultilabelSupportVectorLearning<Gaussian>()
{
    Model = machine, // We will start with an existing machine

    // Configure the learning algorithm to calibrate the underlying
    // SVMs in each of the binary class subproblems.
    Learner = (param) => new ProbabilisticOutputCalibration<Gaussian>()
    {
        Model = param.Model // Start with an existing machine
    }
};

// Configure parallel execution options
calibration.ParallelOptions.MaxDegreeOfParallelism = 1;

// Learn a machine
calibration.Learn(inputs, outputs);

// Obtain class predictions for each sample
bool[][] predicted = machine.Decide(inputs);

// Get class scores for each sample
double[][] scores = machine.Scores(inputs);

// Get log-likelihoods (should be same as scores)
double[][] logl = machine.LogLikelihoods(inputs);

// Get probability for each sample
double[][] prob = machine.Probabilities(inputs);

// Compute classification error using mean accuracy (mAcc)
double error = new HammingLoss(outputs).Loss(predicted);
double loss = new CategoryCrossEntropyLoss(outputs).Loss(prob);
```