ViterbiLearning<TDistribution, TObservation> Class
Namespace: Accord.Statistics.Models.Markov.Learning
public class ViterbiLearning<TDistribution, TObservation> : BaseViterbiLearning<TObservation[]>,
    IUnsupervisedLearning<HiddenMarkovModel<TDistribution, TObservation>, TObservation[], int[]>
    where TDistribution : Object, IFittableDistribution<TObservation>
The ViterbiLearning<TDistribution, TObservation> type exposes the following members.
Constructors

Name | Description
---|---
ViterbiLearning<TDistribution, TObservation> | Creates a new instance of the Viterbi learning algorithm.
Properties

Name | Description
---|---
Batches | Gets or sets how many batches the learning data should be divided into during learning. Batches are used to adequately estimate the first models so they can better compute the Viterbi paths for subsequent passes of the algorithm. Default is 1. (Inherited from BaseViterbiLearning<T>.)
CurrentIteration | Gets the current iteration. (Inherited from BaseViterbiLearning<T>.)
FittingOptions | Gets or sets the distribution fitting options to use when estimating distribution densities during learning (see the configuration sketch after this table).
HasConverged | Gets a value indicating whether this instance has converged. (Inherited from BaseViterbiLearning<T>.)
Iterations | Obsolete. Please use MaxIterations instead. (Inherited from BaseViterbiLearning<T>.)
MaxIterations | Gets or sets the maximum number of iterations performed by the learning algorithm. (Inherited from BaseViterbiLearning<T>.)
Model | Gets the model being trained.
Token | Gets or sets a cancellation token that can be used to cancel the algorithm while it is running (see the cancellation sketch after the Methods table). (Overrides BaseViterbiLearning<T>.Token.)
Tolerance | Gets or sets the maximum change in the average log-likelihood after an iteration of the algorithm used to detect convergence. (Inherited from BaseViterbiLearning<T>.)
UseLaplaceRule | Gets or sets whether to use Laplace's rule of succession to avoid zero probabilities.
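The convergence and density-fitting properties above are normally configured together when the teacher is created. The following is a minimal sketch, not part of the original page, assuming a two-state Normal-density model like the one in the example further below; all option values shown are illustrative assumptions.

using Accord.Statistics.Models.Markov;
using Accord.Statistics.Models.Markov.Learning;
using Accord.Statistics.Models.Markov.Topology;
using Accord.Statistics.Distributions.Univariate;
using Accord.Statistics.Distributions.Fitting;

// A two-state forward model with univariate Normal emission densities.
var model = new HiddenMarkovModel<NormalDistribution, double>(
    new Forward(2), new NormalDistribution());

// Configure the learning properties listed above (values are illustrative):
var teacher = new ViterbiLearning<NormalDistribution, double>(model)
{
    Tolerance = 1e-4,       // convergence: maximum change in the average log-likelihood
    MaxIterations = 100,    // upper bound on iterations (0 runs until convergence)
    Batches = 1,            // batches used to bootstrap the first model estimates
    UseLaplaceRule = true,  // Laplace's rule of succession to avoid zero probabilities
    FittingOptions = new NormalOptions()
    {
        Regularization = 1e-5 // assumed value; guards against zero-variance densities
    }
};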
Methods

Name | Description
---|---
ComputeLogLikelihood | Computes the log-likelihood for the current model for the given observations. (Overrides BaseViterbiLearning<T>.ComputeLogLikelihood(T).)
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
Learn | Learns a model that can map the given inputs to the desired outputs.
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
Run | Runs the learning algorithm. (Inherited from BaseViterbiLearning<T>.)
RunEpoch | Runs one single epoch (iteration) of the learning algorithm. (Overrides BaseViterbiLearning<T>.RunEpoch(T, Int32).)
ToString | Returns a string that represents the current object. (Inherited from Object.)
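Because Learn can run for many iterations, the Token property can be used to stop it cooperatively from another thread. The sketch below is illustrative only and not part of the original page; the ten-second time-out is an arbitrary assumption.

using System;
using System.Threading;
using Accord.Statistics.Models.Markov;
using Accord.Statistics.Models.Markov.Learning;
using Accord.Statistics.Models.Markov.Topology;
using Accord.Statistics.Distributions.Univariate;

var model = new HiddenMarkovModel<NormalDistribution, double>(
    new Forward(2), new NormalDistribution());
var teacher = new ViterbiLearning<NormalDistribution, double>(model);

// Request cancellation automatically after ten seconds (arbitrary value);
// the teacher can observe the token and stop while the algorithm is running.
var source = new CancellationTokenSource(TimeSpan.FromSeconds(10));
teacher.Token = source.Token;

double[][] sequences =
{
    new double[] { 0.1, 5.2, 0.3, 6.7, 0.1, 6.0 },
    new double[] { 0.2, 6.2, 0.3, 6.3, 0.1, 5.0 },
};

teacher.Learn(sequences); // stops early if cancellation is requested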
Extension Methods

Name | Description
---|---
HasMethod | Checks whether an object implements a method with the given name. (Defined by ExtensionMethods.)
IsEqual | Compares two objects for equality, performing an elementwise comparison if the elements are vectors or matrices. (Defined by Matrix.)
To(Type) | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
To<T> | Overloaded. Converts an object into another type, irrespective of whether the conversion can be done at compile time or not. This can be used to convert generic types to numeric types during runtime. (Defined by ExtensionMethods.)
Remarks

The Viterbi learning algorithm is an alternative learning algorithm for hidden Markov models. It works by obtaining the Viterbi path for the set of training observation sequences and then computing the maximum likelihood estimates for the model parameters. These operations are repeated iteratively until the model converges.
The Viterbi learning algorithm is also known as the Segmental K-Means algorithm.
Examples

Accord.Math.Random.Generator.Seed = 0;

// Create continuous sequences. In the sequences below, there
// seem to be two states, one for values between 0 and 1 and
// another for values between 5 and 7. The states seem to be
// switched on every observation.
double[][] sequences = new double[][]
{
    new double[] { 0.1, 5.2, 0.3, 6.7, 0.1, 6.0 },
    new double[] { 0.2, 6.2, 0.3, 6.3, 0.1, 5.0 },
    new double[] { 0.1, 7.0, 0.1, 7.0, 0.2, 5.6 },
};

// Specify an initial normal distribution
var density = new NormalDistribution();

// Create a continuous hidden Markov Model with two states organized in a forward
// topology and an underlying univariate Normal distribution as probability density.
var model = new HiddenMarkovModel<NormalDistribution, double>(new Forward(2), density);

// Configure the learning algorithm to train the sequence classifier until the
// difference in the average log-likelihood changes only by as little as 0.0001
var teacher = new ViterbiLearning<NormalDistribution, double>(model)
{
    Tolerance = 0.0001,
    MaxIterations = 0, // run until convergence (Iterations is obsolete)
};

// Fit the model
teacher.Learn(sequences);

// See the log-likelihood of the sequences learned
double a1 = model.LogLikelihood(new[] { 0.1, 5.2, 0.3, 6.7, 0.1, 6.0 }); // log(0.40)
double a2 = model.LogLikelihood(new[] { 0.2, 6.2, 0.3, 6.3, 0.1, 5.0 }); // log(0.46)

// See the log-likelihood of an unrelated sequence
double a3 = model.LogLikelihood(new[] { 1.1, 2.2, 1.3, 3.2, 4.2, 1.0 }); // log(1.42)
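As a follow-up to the example above (a sketch that is not part of the original page), the most likely hidden state path behind a sequence can then be recovered from the trained model; in recent Accord.NET versions this Viterbi decoding is exposed through the model's Decide method.

// After teacher.Learn(sequences), ask the trained model for the most likely
// (Viterbi) state path of a sequence; each entry is the index of a hidden state.
int[] path = model.Decide(new[] { 0.1, 5.2, 0.3, 6.7, 0.1, 6.0 });

// Given the training data above, the path should alternate between the state
// emitting values near 0-1 and the state emitting values near 5-7.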