Table 1 Explanation of the elements of an HMM

From: A graph-based big data optimization approach using hidden Markov model and constraint satisfaction problem

Element

Description

\(\lambda \)

The discrete HMM (DHMM) model, \(\lambda = (A, B, \Pi )\), or the continuous HMM (CHMM) model, \(\lambda = (A, c_{jm}, \mu _{jm}, \Sigma _{jm}, \Pi )\).

S

The set of hidden states of the HMM, \(S = \{s_{1}, s_{2}, ..., s_{N}\}\) (N states).

V

The set of observation symbols, \(V = \{v_{1}, v_{2}, ..., v_{M} \}\) (M distinct symbols).

O

The observation sequence \(O = \{o_{1}, o_{2}, ..., o_{T} \}\).

Q

The hidden state sequence \(Q = \{q_{1}, q_{2}, ..., q_{T} \}\), \(q_{t} \in S\).

A

The transition matrix \(A=\{a_{ij} \}\), \(a_{ij}=P(q_{t+1}=s_{j} \mid q_{t}=s_{i})\), where \(1\le i,j \le N\). \(q_{t}\) is the state at time t. \(a_{ij}\) is the transition probability from state \(s_{i}\) to state \(s_{j}\). For every state \(s_{i}\), \(\sum _{j=1}^N a_{ij} =1\) and \(a_{ij}\ge 0\).

B

The observation matrix \(B = \{b_{j}(o_{t})\}\), where \(b_{j}(o_{t})=P(o_{t} \mid q_{t}=s_{j})\) is the probability of the \(t^{th}\) observation being emitted in state \(s_{j}\). In the discrete case, \(\sum _{k=1}^{M} b_{j}(v_{k}) =1\) for every state \(s_{j}\). For continuous observations, \(b_{j}(o_{t})\) is a probability density function (pdf) or a mixture of continuous pdfs, and \(o_{t}\) is the observation feature vector recorded at time t: \(b_{j}(o_{t})=\sum _{m=1}^{M} c_{jm} N(o_{t}, \mu _{jm},\Sigma _{jm})\), where \(N(o_{t}, \mu _{jm},\Sigma _{jm})= \frac{1}{((2\pi )^{d}|\Sigma _{jm}|)^{1/2}}\exp \left( -\frac{1}{2}(o_{t}-\mu _{jm}) \Sigma _{jm}^{-1}(o_{t}-\mu _{jm})^T \right) \) is the multivariate Gaussian density, M is the number of mixture components, and d is the dimension of \(o_{t}\); each continuous \(b_{j}\) integrates to 1 over the observation space.
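As an illustrative sketch (not part of the paper), the Gaussian-mixture emission density \(b_{j}(o_{t})\) for a single state can be evaluated as follows; the weights, means, and covariances are hypothetical toy values:

```python
import numpy as np

# Hypothetical mixture parameters for one state s_j: M=2 components, d=2.
c   = np.array([0.6, 0.4])                    # mixture weights c_jm, sum to 1
mu  = np.array([[0.0, 0.0], [2.0, 1.0]])      # component means mu_jm
Sig = np.array([np.eye(2), 0.5 * np.eye(2)])  # component covariances Sigma_jm

def gaussian_pdf(o, mean, cov):
    """Multivariate normal density N(o; mean, cov)."""
    d = len(o)
    diff = o - mean
    norm = ((2 * np.pi) ** d * np.linalg.det(cov)) ** -0.5
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

def b_j(o):
    """Mixture emission density b_j(o) = sum_m c_jm * N(o; mu_jm, Sigma_jm)."""
    return sum(c[m] * gaussian_pdf(o, mu[m], Sig[m]) for m in range(len(c)))
```

The weights must satisfy \(\sum_m c_{jm}=1\), so \(b_{j}\) remains a valid density.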

\(\Pi \)

The stochastic initial distribution vector \(\Pi =\{\pi _{i} \}\), where \(\pi _{i}\) is the probability of \(s_{i}\) being the first state of a state sequence. \(\pi _{i} = P(q_{1}=s_{i})\), \(1\le i \le N\). \(\sum _{i=1}^N \pi _{i} =1\).

\(P(O\mid \lambda )\)

The probability that a given observation sequence \(O = \{o_{1}, o_{2}, ..., o_{T} \}\) is generated by a given HMM \(\lambda \).

\(\alpha _t(i)\)

The forward variable, defined as \(\alpha _t(i)=P(o_{1}, o_{2}, ..., o_{t},q_{t}=s_{i} \mid \lambda )\).
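For illustration (not from the paper), the forward variable can be computed recursively, and \(P(O\mid \lambda )=\sum _{i=1}^N \alpha _T(i)\); the toy values of A, B, and \(\Pi \) below are hypothetical:

```python
import numpy as np

# Hypothetical discrete HMM with N=2 states and M=2 observation symbols.
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])   # a_ij = P(q_{t+1}=s_j | q_t=s_i)
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])   # b_j(v_k) = P(o_t=v_k | q_t=s_j)
Pi = np.array([0.6, 0.4])     # pi_i = P(q_1 = s_i)

def forward(O):
    """alpha[t, i] = P(o_1..o_{t+1}, q_{t+1}=s_i | lambda) (0-indexed t)."""
    T, N = len(O), len(Pi)
    alpha = np.zeros((T, N))
    alpha[0] = Pi * B[:, O[0]]                      # initialization
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]  # induction step
    return alpha

O = [0, 1, 0]                 # observation index sequence
alpha = forward(O)
prob_O = alpha[-1].sum()      # P(O | lambda) = sum_i alpha_T(i)
```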

\(\beta _t(i)\)

The backward variable, defined as \(\beta _{t}(i)=P(o_{t+1}, o_{t+2}, ..., o_{T}\mid q_{t}=s_{i},\lambda )\).

\(\gamma _t(i)\)

The probability of being in state \(s_{i}\) at time t given \(\lambda \) and O. \(\gamma _t(i)=P(q_{t}=s_{i} \mid O, \lambda )\).

\(\xi _t(i,j)\)

The probability of being in state \(s_{i}\) at time t and in state \(s_{j}\) at time \(t+1\) given the model parameter \(\lambda \) and the observation sequence O. \(\xi _{t}(i,j)=P(q_{t}=s_{i},q_{t+1}=s_{j} \mid O, \lambda )\).
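As a sketch (not from the paper), \(\gamma _t(i)\) and \(\xi _{t}(i,j)\) follow from one forward and one backward pass: \(\gamma _t(i)=\alpha _t(i)\beta _t(i)/P(O\mid \lambda )\) and \(\xi _{t}(i,j)=\alpha _t(i)a_{ij}b_{j}(o_{t+1})\beta _{t+1}(j)/P(O\mid \lambda )\). The toy model below is hypothetical:

```python
import numpy as np

# Hypothetical discrete HMM (N=2 states, M=2 symbols) and a short sequence.
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.9, 0.1], [0.2, 0.8]])
Pi = np.array([0.6, 0.4])
O  = [0, 1, 0]
T, N = len(O), len(Pi)

# Forward pass: alpha[t, i] = P(o_1..o_{t+1}, q_{t+1}=s_i | lambda).
alpha = np.zeros((T, N))
alpha[0] = Pi * B[:, O[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]

# Backward pass: beta[t, i] = P(o_{t+2}..o_T | q_{t+1}=s_i, lambda).
beta = np.ones((T, N))                       # beta_T(i) = 1 by convention
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])

prob_O = alpha[-1].sum()                     # P(O | lambda)

# gamma[t, i] = P(q=s_i at time t | O, lambda); each row sums to 1.
gamma = alpha * beta / prob_O

# xi[t, i, j] = P(q=s_i at t, q=s_j at t+1 | O, lambda), for t = 1..T-1.
xi = np.zeros((T - 1, N, N))
for t in range(T - 1):
    xi[t] = (alpha[t][:, None] * A
             * (B[:, O[t + 1]] * beta[t + 1])[None, :]) / prob_O
```

These quantities satisfy \(\gamma _t(i)=\sum _{j=1}^N \xi _{t}(i,j)\) for \(t<T\), which is the identity used in Baum-Welch re-estimation.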

\(\gamma _t(j,m)\)

The probability that given the model parameter \(\lambda \), the observation \(o_{t}\) is generated from state \(s_{j}\) and accounted for by the \(m^{th}\) component of the Gaussian mixture density of state \(s_{j}\).

\(\mu _{jm}\)

The mean of the \(m^{th}\) mixture in state \(s_{j}\).

\(\Sigma _{jm}\)

The covariance matrix of the \(m^{th}\) mixture in state \(s_{j}\).

\(c_{jm}\)

The weight of the \(m^{th}\) mixture component in state \(s_{j}\), where \(\sum _{m=1}^{M} c_{jm}=1\) and \(c_{jm}\ge 0\).

\(\delta _t(i)\)

The likelihood score of the optimal (most likely) sequence of hidden states of length t (ending in state \(s_{i}\)) that produces the first t observations for the given model. \(\delta _{t}(i)=\underset{q_{1},q_{2},...,q_{t-1}}{\max }P(q_{1},q_{2},...,q_{t-1},q_{t}=s_{i},o_{1},o_{2},...,o_{t}\mid \lambda )\).

\(\psi _t(i)\)

The array of back pointers that stores, for each state and time step, the predecessor state on the most probable path, so the optimal state sequence can be recovered by backtracking.
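As an illustration (not from the paper), the \(\delta \) recursion and the \(\psi \) back pointers together give Viterbi decoding; the toy A, B, and \(\Pi \) values are hypothetical:

```python
import numpy as np

# Hypothetical discrete HMM (N=2 states, M=2 symbols) and a short sequence.
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.9, 0.1], [0.2, 0.8]])
Pi = np.array([0.6, 0.4])
O  = [0, 1, 0]
T, N = len(O), len(Pi)

delta = np.zeros((T, N))             # delta[t, i]: best path score ending in s_i
psi   = np.zeros((T, N), dtype=int)  # psi[t, i]: predecessor state on that path

delta[0] = Pi * B[:, O[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] * A   # scores[i, j] = delta_{t-1}(i) * a_ij
    psi[t] = scores.argmax(axis=0)       # best predecessor for each state j
    delta[t] = scores.max(axis=0) * B[:, O[t]]

# Backtrack from the best final state to recover the optimal state sequence Q.
q = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    q.append(int(psi[t][q[-1]]))
q.reverse()
```

The final score \(\max _i \delta _T(i)\) is the joint probability of the best path and the observations, and `q` holds the decoded state indices.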