
ProbabilisticOutputCalibration<TKernel, TInput> Class

Probabilistic Output Calibration for structured Kernel machines.
Inheritance Hierarchy
System.Object
  Accord.MachineLearning.BinaryLearningBase<SupportVectorMachine<TKernel, TInput>, TInput>
    Accord.MachineLearning.VectorMachines.Learning.ProbabilisticOutputCalibrationBase<SupportVectorMachine<TKernel, TInput>, TKernel, TInput>
      Accord.MachineLearning.VectorMachines.Learning.ProbabilisticOutputCalibration<TKernel, TInput>

Namespace:  Accord.MachineLearning.VectorMachines.Learning
Assembly:  Accord.MachineLearning (in Accord.MachineLearning.dll) Version: 3.8.0
Syntax
public class ProbabilisticOutputCalibration<TKernel, TInput> : ProbabilisticOutputCalibrationBase<SupportVectorMachine<TKernel, TInput>, TKernel, TInput>
where TKernel : IKernel<TInput>
where TInput : ICloneable

Type Parameters

TKernel
TInput

The ProbabilisticOutputCalibration<TKernel, TInput> type exposes the following members.

Constructors
Properties
  Name  Description
Public property  Iterations
Gets or sets the maximum number of iterations. Default is 100.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Public property  Model
Gets or sets the classifier being learned.
(Inherited from BinaryLearningBase<TModel, TInput>.)
Public property  StepSize
Gets or sets the minimum step size used during line search. Default is 1e-10.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Public property  Token
Gets or sets a cancellation token that can be used to stop the learning algorithm while it is running.
(Inherited from BinaryLearningBase<TModel, TInput>.)
Public property  Tolerance
Gets or sets the tolerance under which the answer must be found. Default is 1e-5.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Top
Methods
  Name  Description
Public method  Equals
Determines whether the specified object is equal to the current object.
(Inherited from Object.)
Protected method  Finalize
Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection.
(Inherited from Object.)
Public method  GetHashCode
Serves as the default hash function.
(Inherited from Object.)
Public method  GetType
Gets the Type of the current instance.
(Inherited from Object.)
Public method  Learn(TInput[], Boolean[][], Double[])
Learns a model that can map the given inputs to the given outputs.
(Inherited from BinaryLearningBase<TModel, TInput>.)
Public method  Learn(TInput[], Double[], Double[])
Learns a model that can map the given inputs to the given outputs.
(Inherited from BinaryLearningBase<TModel, TInput>.)
Public method  Learn(TInput[], Int32[], Double[])
Learns a model that can map the given inputs to the given outputs.
(Inherited from BinaryLearningBase<TModel, TInput>.)
Public method  Learn(TInput[], Int32[][], Double[])
Learns a model that can map the given inputs to the given outputs.
(Inherited from BinaryLearningBase<TModel, TInput>.)
Public method  Learn(TInput[], Boolean[], Double[])
Learns a model that can map the given inputs to the given outputs.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Public method  LogLikelihood  Obsolete.
Obsolete.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Protected method  MemberwiseClone
Creates a shallow copy of the current Object.
(Inherited from Object.)
Public method  Run  Obsolete.
Obsolete.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Public method  Run(Boolean)  Obsolete.
Obsolete.
(Inherited from ProbabilisticOutputCalibrationBase<TModel, TKernel, TInput>.)
Public method  ToString
Returns a string that represents the current object.
(Inherited from Object.)
Top
Extension Methods
  Name  Description
Public Extension Method  HasMethod
Checks whether an object implements a method with the given name.
(Defined by ExtensionMethods.)
Public Extension Method  IsEqual
Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices.
(Defined by Matrix.)
Public Extension Method  To(Type)  Overloaded.
Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)
Public Extension Method  To<T>  Overloaded.
Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)
Top
Remarks

Instead of producing probabilistic outputs, Support Vector Machines express their decisions as a distance from the separating hyperplane in feature space. To convert SVM outputs into probabilities, Platt (1999) proposed calibrating the SVM outputs using a sigmoid (logit) link function. Later, Lin et al. (2007) provided a corrected and numerically more stable version of Platt's probabilistic outputs. This class implements the latter.
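The idea behind the calibration can be sketched in a few lines. The following Python snippet (not part of the Accord.NET API; names and parameter values are illustrative only) shows the sigmoid link that maps a raw SVM decision value to a probability. In the actual algorithm, the two parameters of the sigmoid are fitted on training decision values by minimizing the negative log-likelihood, using the robust Newton iteration described by Lin et al. (2007):

```python
import math

def platt_probability(f, a, b):
    """Map a raw SVM decision value f to P(y = +1 | f) using
    Platt's sigmoid link:  p = 1 / (1 + exp(a*f + b)).

    In Platt's method, a and b are not chosen by hand: they are
    fitted on held-out decision values by maximizing the
    log-likelihood of the observed labels.  A well-fitted a is
    negative, so larger decision values give larger probabilities.
    """
    return 1.0 / (1.0 + math.exp(a * f + b))

# With illustrative parameters a = -2, b = 0, a decision value of 0
# (a point exactly on the margin) maps to probability 0.5, while
# strongly positive decision values approach probability 1.
p_margin = platt_probability(0.0, -2.0, 0.0)   # 0.5
p_far = platt_probability(3.0, -2.0, 0.0)      # close to 1
```

Note that calibration only re-scales the decision values; it never changes which side of the margin a sample falls on, so the hard class decisions of the machine are unaffected.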

This class is not an actual learning algorithm, but a calibrator. Machines passed as input to this algorithm should already have been trained by a proper learning algorithm such as Sequential Minimal Optimization (SMO).

This class can also be used in combination with MulticlassSupportVectorLearning<TKernel> or MultilabelSupportVectorLearning<TKernel> to learn MulticlassSupportVectorMachine<TKernel>s using the one-vs-one or one-vs-all multi-class decision strategies, respectively.

References:

  • John C. Platt. 1999. Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. In ADVANCES IN LARGE MARGIN CLASSIFIERS (1999), pp. 61-74.
  • Hsuan-Tien Lin, Chih-Jen Lin, and Ruby C. Weng. 2007. A note on Platt's probabilistic outputs for support vector machines. Mach. Learn. 68, 3 (October 2007), 267-276.

Examples

The following example shows how to calibrate an SVM that has been trained to perform a simple XOR function.

double[][] inputs = // Example XOR problem
{
    new double[] { 0, 0 }, // 0 xor 0: 0 (negative class)
    new double[] { 0, 1 }, // 0 xor 1: 1 (positive class)
    new double[] { 1, 0 }, // 1 xor 0: 1 (positive class)
    new double[] { 1, 1 }  // 1 xor 1: 0 (negative class)
};

int[] outputs = // XOR outputs
{
    0, 1, 1, 0
};

// Instantiate a new SMO learning algorithm for SVMs
var smo = new SequentialMinimalOptimization<Gaussian>()
{
    Kernel = new Gaussian(0.1),
    Complexity = 1.0
};

// Learn a SVM using the algorithm
var svm = smo.Learn(inputs, outputs);

// Predict labels for each input sample
bool[] predicted = svm.Decide(inputs);

// Compute classification error
double error = new ZeroOneLoss(outputs).Loss(predicted);

// Instantiate the probabilistic calibration (using Platt's scaling)
var calibration = new ProbabilisticOutputCalibration<Gaussian>(svm);

// Run the calibration algorithm
calibration.Learn(inputs, outputs); // returns the same machine

// Predict probabilities of each input sample
double[] probabilities = svm.Probability(inputs);

// Compute the error based on a hard decision
double loss = new BinaryCrossEntropyLoss(outputs).Loss(probabilities);

// Compute the decision output for one of the input vectors,
// while also retrieving the probability of the answer

bool decision;
double probability = svm.Probability(inputs[0], out decision);

The next example shows how to solve a multi-class problem using a one-vs-one SVM where the binary machines are learned using SMO and calibrated using Platt's scaling.

// Let's say we have the following data to be classified
// into three possible classes. Those are the samples:
// 
double[][] inputs =
{
    //               input         output
    new double[] { 0, 1, 1, 0 }, //  0 
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 0, 0, 1, 0 }, //  0
    new double[] { 0, 1, 1, 0 }, //  0
    new double[] { 0, 1, 0, 0 }, //  0
    new double[] { 1, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 0, 0, 0, 1 }, //  1
    new double[] { 1, 0, 1, 1 }, //  2
    new double[] { 1, 1, 0, 1 }, //  2
    new double[] { 0, 1, 1, 1 }, //  2
    new double[] { 1, 1, 1, 1 }, //  2
};

int[] outputs = // those are the class labels
{
    0, 0, 0, 0, 0,
    1, 1, 1,
    2, 2, 2, 2,
};

// Create the multi-class learning algorithm for the machine
var teacher = new MulticlassSupportVectorLearning<Gaussian>()
{
    // Configure the learning algorithm to use SMO to train the
    //  underlying SVMs in each of the binary class subproblems.
    Learner = (param) => new SequentialMinimalOptimization<Gaussian>()
    {
        // Estimate a suitable guess for the Gaussian kernel's parameters.
        // This estimate can serve as a starting point for a grid search.
        UseKernelEstimation = true
    }
};

// Learn a machine
var machine = teacher.Learn(inputs, outputs);


// Create the calibration algorithm, reusing the machine learned above
var calibration = new MulticlassSupportVectorLearning<Gaussian>()
{
    Model = machine, // We will start with an existing machine

    // Configure the learning algorithm to use Platt's calibration
    Learner = (param) => new ProbabilisticOutputCalibration<Gaussian>()
    {
        Model = param.Model // Start with an existing machine
    }
};


// Configure parallel execution options
calibration.ParallelOptions.MaxDegreeOfParallelism = 1;

// Learn a machine
calibration.Learn(inputs, outputs);

// Obtain class predictions for each sample
int[] predicted = machine.Decide(inputs);

// Get class scores for each sample
double[] scores = machine.Score(inputs);

// Get log-likelihoods (should be same as scores)
double[][] logl = machine.LogLikelihoods(inputs);

// Get probability for each sample
double[][] prob = machine.Probabilities(inputs);

// Compute classification error
double error = new ZeroOneLoss(outputs).Loss(predicted);
double loss = new CategoryCrossEntropyLoss(outputs).Loss(prob);
See Also