Finding the stationary distribution of a Markov chain : Exercises
Introduction
Example problems
Problem 1
The chain has a limiting stationary distribution if, in the long-time limit, the probability of being in any given state does not depend on the starting state. This holds for $0 \lt p \le \frac{1}{4}$. It does not hold for $p=0$: when $p=0$ the transition matrix is the identity, so a chain started in state 1 is guaranteed to remain in state 1 forever, and likewise for states 2 and 3. The state we occupy in the long-time limit therefore depends on the initial state, and there is no limiting stationary distribution.
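We can illustrate this numerically. The sketch below (my addition, assuming NumPy; the matrix is the transition matrix given later in this solution) compares powers of $\mathbf{A}$ for $p=0$, where nothing ever mixes, and for an illustrative value $p=0.1$, where every row of $\mathbf{A}^n$ converges to the same vector:

```python
import numpy as np

def transition_matrix(p):
    # Transition matrix from the problem statement
    return np.array([
        [1 - 4*p, 4*p,     0.0    ],
        [2*p,     1 - 4*p, 2*p    ],
        [0.0,     2*p,     1 - 2*p],
    ])

# For p = 0 the matrix is the identity, so the chain never leaves its
# starting state: A^n is still the identity for every n
print(np.linalg.matrix_power(transition_matrix(0.0), 100))

# For 0 < p <= 1/4 every row of A^n converges to the same vector, so the
# long-time behaviour is independent of the starting state
print(np.linalg.matrix_power(transition_matrix(0.1), 1000))
```

Identical rows in the second printed matrix are exactly the statement that the starting state has been forgotten.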
There are two ways of proceeding here:
(1) Gaussian Elimination:
We want to find the solution, $\pi$, of the following matrix equation: $$ \pi \mathbf{A} = \pi \qquad \rightarrow \qquad [\mathbf{A}^T - \mathbf{I}] \pi = 0 $$ We solve this by Gaussian elimination, first reducing the matrix to upper-triangular form. We start from: $$ \mathbf{A}^T - \mathbf{I} = \left( \begin{matrix} -4p & 2p & 0 \\ 4p & -4p & 2p \\ 0 & 2p & -2p \end{matrix} \right) $$ Adding row (1) to row (2) gives: $$ \left( \begin{matrix} -4p & 2p & 0 \\ 0 & -2p & 2p \\ 0 & 2p & -2p \end{matrix} \right) $$ And now adding row (2) to row (3) gives an upper-triangular matrix: $$ \left( \begin{matrix} -4p & 2p & 0 \\ 0 & -2p & 2p \\ 0 & 0 & 0 \end{matrix} \right) $$ We can thus write: $$ \left( \begin{matrix} -4p & 2p & 0 \\ 0 & -2p & 2p \\ 0 & 0 & 0 \end{matrix} \right) \left( \begin{matrix} \pi_1 \\ \pi_2 \\ \pi_3 \end{matrix} \right) = 0 \qquad \rightarrow \qquad \begin{matrix} -4p \pi_1 + 2p \pi_2 = 0 \\ -2p \pi_2 + 2p\pi_3 = 0 \end{matrix} \qquad \rightarrow \qquad \begin{matrix} 2 \pi_1 = \pi_2 \\ \pi_2 = \pi_3 \end{matrix} $$
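The same linear system can be solved numerically. A minimal sketch (my addition, assuming NumPy, and picking the illustrative value $p = 0.1$): since the third row of the eliminated system is all zeros, we replace it with the normalisation condition $\pi_1 + \pi_2 + \pi_3 = 1$, which gives a square, non-singular system:

```python
import numpy as np

p = 0.1  # any value in (0, 1/4] works; 0.1 is an arbitrary choice
A = np.array([
    [1 - 4*p, 4*p,     0.0    ],
    [2*p,     1 - 4*p, 2*p    ],
    [0.0,     2*p,     1 - 2*p],
])

# Build [A^T - I] pi = 0; the third equation is redundant after
# elimination, so replace it with the normalisation sum(pi) = 1
M = A.T - np.eye(3)
M[2, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

pi = np.linalg.solve(M, b)
print(pi)  # -> approximately [0.2, 0.4, 0.4]
```

Any $p$ in the allowed range gives the same answer, since $p$ cancels out of the reduced equations.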
(2) Detailed Balance:
We can arrive at the two simultaneous equations above via an alternative route if we use the detailed balance condition. Remember that this condition relates the elements of the transition matrix, $\mathbf{A}$, to the elements of the limiting stationary distribution, $\pi$, by: $$ \pi_i A_{ij} = \pi_j A_{ji} $$ With our transition matrix: $$ \mathbf{A} = \left( \begin{matrix} 1 - 4p & 4p & 0 \\ 2p & 1 - 4p & 2p \\ 0 & 2p & 1 - 2p \end{matrix} \right) $$ We thus have for states 1 and 2: $\qquad \pi_1 4p = \pi_2 2p \qquad \rightarrow \qquad 2\pi_1 = \pi_2$
And for states 2 and 3: $\qquad \pi_2 2p = \pi_3 2p \qquad \rightarrow \qquad \pi_2 = \pi_3$
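As a quick sanity check (my addition, assuming NumPy), detailed balance says the probability flow $\pi_i A_{ij}$ between every pair of states is symmetric, which we can test as a single matrix identity:

```python
import numpy as np

p = 0.1  # illustrative value in (0, 1/4]
A = np.array([
    [1 - 4*p, 4*p,     0.0    ],
    [2*p,     1 - 4*p, 2*p    ],
    [0.0,     2*p,     1 - 2*p],
])
pi = np.array([1/5, 2/5, 2/5])  # the stationary distribution found below

# flows[i, j] = pi_i * A_ij; detailed balance holds iff this matrix
# equals its own transpose
flows = pi[:, None] * A
print(np.allclose(flows, flows.T))  # -> True
```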
The end of the problem
Both the methods described above (Gaussian elimination and detailed balance) got us to the following equations: $$ 2\pi_1 = \pi_2 \qquad \textrm{and} \qquad \pi_2 = \pi_3 $$ We thus have two simultaneous equations and three unknowns, so we cannot solve them uniquely. We know, however, that $\pi$ is a vector of probabilities and that as such this vector must be normalized so that: $$ \pi_1 + \pi_2 + \pi_3 = 1 $$ We can thus write: $$ 1= \pi_1 + \pi_2 + \pi_3 = \pi_1 + 2 \pi_1 + 2\pi_1 = 5\pi_1 \qquad \rightarrow \qquad \pi_1 = \frac{1}{5} $$ Inserting this value into the equations above gives: $$ \pi_1 = \frac{1}{5} \qquad \pi_2 = \frac{2}{5} \qquad \pi_3 = \frac{2}{5} $$
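To close, a short sketch (my addition, assuming NumPy) confirms that $\pi = (1/5, 2/5, 2/5)$ satisfies $\pi \mathbf{A} = \pi$ for several values of $p$, not just one, which matches the fact that $p$ cancelled out of our simultaneous equations:

```python
import numpy as np

pi = np.array([1/5, 2/5, 2/5])

for p in (0.01, 0.1, 0.25):
    A = np.array([
        [1 - 4*p, 4*p,     0.0    ],
        [2*p,     1 - 4*p, 2*p    ],
        [0.0,     2*p,     1 - 2*p],
    ])
    # Stationarity: the distribution is unchanged by one step of the chain
    assert np.allclose(pi @ A, pi)

print("pi = (1/5, 2/5, 2/5) is stationary for every p tested")
```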