The notions of observable and hidden states are similar to Plato's notions of shadows and forms in the allegory of the cave. The allegory holds that perceived reality is only the shadow, cast into the world of experience, of a true reality that is inaccessible to direct sensory experience. 'Forms' in the true reality contain the essence of a class of objects, which can be experienced only incompletely in perceived reality. The analogy is particularly apt when modelling parts of speech, sentences, and other entities whose semantic meaning is well defined independently of the myriad possible representations in the observable sequence.
In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. A hidden Markov model adds outputs: each state has a probability distribution over the possible output tokens. Therefore, looking at a sequence of tokens generated by an HMM does not directly indicate the sequence of states.
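This separation between hidden states and emitted tokens can be made concrete with a small sampling sketch. The two-state weather model below (Rainy/Sunny states emitting walk/shop/clean observations) is an illustrative assumption, not taken from the text; the point is that an observer receives only the emitted tokens, never the state sequence itself.

```python
import random

# Illustrative model parameters (assumed for this sketch).
states = ["Rainy", "Sunny"]
observations = ["walk", "shop", "clean"]

start_prob = {"Rainy": 0.6, "Sunny": 0.4}
trans_prob = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emit_prob = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

def draw(dist):
    """Sample one key from a {key: probability} dict."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def sample(length):
    """Generate a (hidden state sequence, observed token sequence) pair."""
    state = draw(start_prob)
    hidden, observed = [], []
    for _ in range(length):
        hidden.append(state)                     # state is never shown to the observer
        observed.append(draw(emit_prob[state]))  # only this token is visible
        state = draw(trans_prob[state])
    return hidden, observed

hidden, observed = sample(5)
```

An outside observer sees only `observed`; recovering `hidden` from it is exactly what makes the model "hidden".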
There are three canonical problems to solve with HMMs:
1. Given the model parameters, compute the probability of a particular observation sequence (solved by the forward algorithm).
2. Given the model parameters and an observation sequence, find the most likely sequence of hidden states that could have generated it (solved by the Viterbi algorithm).
3. Given one or more observation sequences, find the most likely set of model parameters (solved by the Baum-Welch algorithm).
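The first of these problems, evaluating the probability of an observation sequence, is solved by the forward algorithm. A minimal sketch follows; the weather-model parameters are an illustrative assumption, not part of the text above.

```python
# Illustrative model parameters (assumed for this sketch).
states = ["Rainy", "Sunny"]
start_prob = {"Rainy": 0.6, "Sunny": 0.4}
trans_prob = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emit_prob = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

def forward(obs_seq):
    """Total probability of obs_seq under the model, by dynamic programming.

    alpha[s] = P(observations so far, current hidden state = s);
    summing alpha over states at the end marginalizes out the states.
    """
    alpha = {s: start_prob[s] * emit_prob[s][obs_seq[0]] for s in states}
    for obs in obs_seq[1:]:
        alpha = {
            s: emit_prob[s][obs] * sum(alpha[r] * trans_prob[r][s] for r in states)
            for s in states
        }
    return sum(alpha.values())

p = forward(["walk", "shop", "clean"])
```

Summing over all state paths directly would cost exponentially many terms; the recursion above does the same computation in time linear in the sequence length.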