BaumWelchLearning Class 
Namespace: Accord.Statistics.Models.Markov.Learning
public class BaumWelchLearning : BaseBaumWelchLearning<HiddenMarkovModel, GeneralDiscreteDistribution, int, GeneralDiscreteOptions>
The BaumWelchLearning type exposes the following members.

Constructors

Name  Description

BaumWelchLearning
Creates a new instance of the Baum-Welch learning algorithm.

Properties

Name  Description

FittingOptions
Gets or sets the distribution fitting options
to use when estimating distribution densities
during learning.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

Iterations
Gets or sets the maximum number of iterations
performed by the learning algorithm.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

LogGamma
Gets the Gamma matrix of log probabilities created during
the last iteration of the Baum-Welch learning algorithm.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

LogKsi
Gets the Ksi matrix of log probabilities created during
the last iteration of the Baum-Welch learning algorithm.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

LogLikelihood
Gets the log-likelihood of the model at the last iteration.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

LogWeights
Gets the sample weights in the last iteration of the
Baum-Welch learning algorithm.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

Model
Gets the model being trained.
(Inherited from BaseHiddenMarkovModelLearning<TModel, TObservation>.)

NumberOfStates
Gets or sets the number of states in the models to be learned.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

Observations
Gets all observations as a single vector.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

ParallelOptions
Gets or sets the parallelization options for this algorithm.
(Inherited from ParallelLearningBase.)

Token
Gets or sets a cancellation token that can be used
to cancel the algorithm while it is running.
(Inherited from ParallelLearningBase.)

Tolerance
Gets or sets the maximum change in the average log-likelihood
after an iteration of the algorithm used to detect convergence.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

Topology
Gets or sets the state-transition topology to be used.
Default is Ergodic.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)
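Together, Tolerance and Iterations define the stopping rule: training ends when the change in the average log-likelihood drops below Tolerance, or when the iteration cap is reached (a cap of 0 means "run until convergence"). A minimal sketch of that rule, written as illustrative Python rather than the library's C# (the helper name run_until_converged is hypothetical):

```python
def run_until_converged(step, tolerance=1e-4, max_iterations=0):
    """Drive an iterative learner until its log-likelihood stabilizes.

    step: callable returning the current average log-likelihood.
    max_iterations == 0 means "no iteration cap", mirroring the
    Iterations property described above."""
    previous = float("-inf")
    iteration = 0
    while True:
        iteration += 1
        current = step()
        # Converged: the log-likelihood moved less than the tolerance.
        if abs(current - previous) < tolerance:
            return current, iteration
        # Iteration cap reached (0 disables the cap).
        if max_iterations != 0 and iteration >= max_iterations:
            return current, iteration
        previous = current
```

For example, a sequence of log-likelihoods -10, -5, -4.99995, ... stops on the third call with a tolerance of 1e-3, because the last improvement is smaller than the tolerance.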
Methods

Name  Description

ComputeForwardBackward
Computes the forward and backward probability matrices
for a given observation, referenced by its index in the
input training data.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

ComputeKsi
Computes the Ksi matrix of probabilities for a given observation,
referenced by its index in the input training data.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

Equals
Determines whether the specified object is equal to the current object.
(Inherited from Object.)

Finalize
Allows an object to try to free resources and perform other cleanup
operations before it is reclaimed by garbage collection.
(Inherited from Object.)

Fit
Fits one emission distribution. This method can be overridden in a
derived class in order to implement special fitting options.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

FromMixtureModel(HiddenMarkovModel<Mixture<NormalDistribution>>, NormalOptions)
Creates a Baum-Welch learning algorithm with default configurations for
hidden Markov models with normal mixture densities.

FromMixtureModel(HiddenMarkovModel<MultivariateMixture<MultivariateNormalDistribution>>, NormalOptions)
Creates a Baum-Welch learning algorithm with default configurations for
hidden Markov models with multivariate normal mixture densities.

GetHashCode
Serves as the default hash function.
(Inherited from Object.)

GetType
Gets the Type of the current instance.
(Inherited from Object.)

Learn
Learns a model that can map the given inputs to the desired outputs.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)

MemberwiseClone
Creates a shallow copy of the current Object.
(Inherited from Object.)

Run(Int32)
Obsolete.

Run(Int32)
Obsolete.

ToString
Returns a string that represents the current object.
(Inherited from Object.)

UpdateEmissions
Updates the emission probability matrix.
(Inherited from BaseBaumWelchLearning<TModel, TDistribution, TObservation, TOptions>.)
Extension Methods

Name  Description

HasMethod
Checks whether an object implements a method with the given name.
(Defined by ExtensionMethods.)

IsEqual
Compares two objects for equality, performing an element-wise
comparison if the elements are vectors or matrices.
(Defined by Matrix.)

To<T>
Overloaded. Converts an object into another type, irrespective of whether
the conversion can be done at compile time or not. This can be
used to convert generic types to numeric types during runtime.
(Defined by ExtensionMethods.)

To<T>
Overloaded. Converts an object into another type, irrespective of whether
the conversion can be done at compile time or not. This can be
used to convert generic types to numeric types during runtime.
(Defined by Matrix.)
Remarks

The Baum-Welch algorithm is an unsupervised algorithm used to learn a single hidden Markov model object from a set of observation sequences. It works by using a variant of the Expectation-Maximization algorithm to search for the set of model parameters (i.e. the matrix of transition probabilities A, the matrix of emission probabilities B, and the initial probability vector π) that maximizes the likelihood that the model could have generated the training sequences given to the algorithm.
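The E-step computes, from the forward and backward recursions, the state posteriors (the Gamma matrix) and transition posteriors (the Ksi matrix); the M-step then re-estimates π, A and B from those expected counts. The following sketch is an illustrative NumPy reimplementation for a single discrete sequence, not the library's code (which, as noted below, works in log-space); the function name baum_welch and its defaults are assumptions for this example:

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    """Illustrative Baum-Welch for one discrete observation sequence.
    Uses probability space with per-step scaling for readability."""
    obs = np.asarray(obs)
    T = len(obs)
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(n_states), size=n_states)    # transitions
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emissions
    pi = np.full(n_states, 1.0 / n_states)                 # initial vector
    for _ in range(n_iter):
        # E-step: scaled forward (alpha) and backward (beta) recursions.
        alpha = np.zeros((T, n_states))
        scale = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        scale[0] = alpha[0].sum()
        alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            scale[t] = alpha[t].sum()
            alpha[t] /= scale[t]
        beta = np.zeros((T, n_states))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
        # State posteriors (gamma) and transition posteriors (ksi).
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        ksi = np.zeros((T - 1, n_states, n_states))
        for t in range(T - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            ksi[t] = x / x.sum()
        # M-step: re-estimate pi, A and B from the expected counts.
        pi = gamma[0]
        A = ksi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi, np.log(scale).sum()   # parameters + last log-likelihood
```

Because each iteration of Expectation-Maximization cannot decrease the likelihood, running more iterations from the same initialization yields a log-likelihood at least as high as running fewer.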
For increased accuracy, this class performs all computations using log-probabilities.
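Log-space matters because a product of many per-step probabilities underflows double precision very quickly, while sums of probabilities can still be taken in log-space via the log-sum-exp identity. A small plain-Python illustration of both points (not library code):

```python
import math

def log_sum_exp(log_values):
    """Compute log(sum(exp(v) for v in log_values)) without leaving
    log-space.  Factoring out the maximum keeps every exp() in range."""
    m = max(log_values)
    if m == float("-inf"):        # all terms are zero probabilities
        return m
    return m + math.log(sum(math.exp(v - m) for v in log_values))

# Multiplying 1,000 per-step probabilities of 0.1 underflows to 0.0 ...
p = 1.0
for _ in range(1000):
    p *= 0.1

# ... while the equivalent sum of log-probabilities stays well-behaved.
log_p = sum(math.log(0.1) for _ in range(1000))
```

Here log_sum_exp([log 0.5, log 0.5]) recovers log 1.0 = 0 exactly where a naive sum of already-underflowed probabilities would lose information.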
For a more thorough explanation of hidden Markov models, with practical examples on gesture recognition, please see Sequence Classifiers in C#, Part I: Hidden Markov Models [1].
[1]: http://www.codeproject.com/Articles/541428/SequenceClassifiersinCsharpPartIHiddenMarko
Examples

// We will create a Hidden Markov Model to detect
// whether a given sequence starts with a zero.
int[][] sequences = new int[][]
{
    new int[] { 0,1,1,1,1,0,1,1,1,1 },
    new int[] { 0,1,1,1,0,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1,1,1,1,1 },
    new int[] { 0,1,1,1,1,1,1,1,1,1 },
};

// Create a new Hidden Markov Model with 3 states for
// an output alphabet of two characters (zero and one)
var hmm = new HiddenMarkovModel(states: 3, symbols: 2);

// Create the learning algorithm
var teacher = new BaumWelchLearning(hmm)
{
    Tolerance = 0.0001, // until log-likelihood changes less than 0.0001
    Iterations = 0      // and use as many iterations as needed
};

// Estimate the model
teacher.Learn(sequences);

// Now we can calculate the probability that the given
// sequences originated from the model. We can compute
// those probabilities using the Viterbi algorithm:
double vl1; hmm.Decode(new int[] { 0, 1 }, out vl1);       // -0.69317855
double vl2; hmm.Decode(new int[] { 0, 1, 1, 1 }, out vl2); // -2.16644878

// Sequences which do not start with zero have a much lower probability.
double vl3; hmm.Decode(new int[] { 1, 1 }, out vl3);       // -11.3580034
double vl4; hmm.Decode(new int[] { 1, 0, 0, 0 }, out vl4); // -38.6759130

// Sequences which contain a few errors have higher probability
// than the ones which do not start with zero. This shows some
// of the temporal elasticity and error tolerance of the HMMs.
double vl5; hmm.Decode(new int[] { 0, 1, 0, 1, 1, 1, 1, 1, 1 }, out vl5); // -8.22665
double vl6; hmm.Decode(new int[] { 0, 1, 1, 1, 1, 1, 1, 0, 1 }, out vl6); // -8.22665

// Additionally, we can also compute the probability
// of those sequences using the forward algorithm:
double fl1 = hmm.LogLikelihood(new int[] { 0, 1 });       // -0.000031369
double fl2 = hmm.LogLikelihood(new int[] { 0, 1, 1, 1 }); // -0.087005121

// Sequences which do not start with zero have a much lower probability.
double fl3 = hmm.LogLikelihood(new int[] { 1, 1 });       // -10.66485629
double fl4 = hmm.LogLikelihood(new int[] { 1, 0, 0, 0 }); // -36.61788687

// Sequences which contain a few errors have higher probability
// than the ones which do not start with zero. This shows some
// of the temporal elasticity and error tolerance of the HMMs.
double fl5 = hmm.LogLikelihood(new int[] { 0, 1, 0, 1, 1, 1, 1, 1, 1 }); // -3.3744416
double fl6 = hmm.LogLikelihood(new int[] { 0, 1, 1, 1, 1, 1, 1, 0, 1 }); // -3.3744416
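The Decode calls above run the Viterbi algorithm, which in log-space is a simple max-sum dynamic program followed by a backtracking pass. A hedged Python sketch of that procedure (illustrative only, not the Accord implementation; the function name viterbi_decode is an assumption):

```python
import math

def viterbi_decode(obs, log_pi, log_A, log_B):
    """Most likely state path for obs and its log-probability.
    log_pi[i]: initial, log_A[i][j]: transition, log_B[i][k]: emission."""
    n = len(log_pi)
    # delta[i]: best log-probability over paths ending in state i.
    delta = [log_pi[i] + log_B[i][obs[0]] for i in range(n)]
    back = []                       # argmax pointers for backtracking
    for o in obs[1:]:
        prev = delta[:]
        step = []
        delta = []
        for j in range(n):
            best_i = max(range(n), key=lambda i: prev[i] + log_A[i][j])
            step.append(best_i)
            delta.append(prev[best_i] + log_A[best_i][j] + log_B[j][o])
        back.append(step)
    # Backtrack from the best final state.
    last = max(range(n), key=lambda i: delta[i])
    path = [last]
    for step in reversed(back):
        path.append(step[path[-1]])
    path.reverse()
    return path, delta[last]
```

With a toy two-state model where state 0 prefers symbol 0 and state 1 prefers symbol 1, decoding the sequence [0, 1, 1] yields the path [0, 1, 1], matching the intuition behind the vl values in the example above.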