Chapter 17 Summary: Discrete-Time Markov Chains
Key Points
1. A discrete-time Markov chain is a sequence of random variables satisfying the Markov property: the future depends on the past only through the present state.
2. The chain is fully characterized by its transition matrix $P$, a row-stochastic matrix. The $n$-step transition probabilities are given by $P^{(n)} = P^n$, as established by the Chapman-Kolmogorov equations.
3. States are classified as accessible, communicating, recurrent, or transient. Recurrence and transience are class properties: in an irreducible chain, all states share the same classification.
4. The period of a state $i$ is $d(i) = \gcd\{n \ge 1 : P_{ii}^{(n)} > 0\}$. A chain with period 1 is aperiodic. Any self-loop ($P_{ii} > 0$) in an irreducible chain guarantees aperiodicity.
5. An irreducible, positive recurrent chain has a unique stationary distribution $\pi$ satisfying $\pi = \pi P$ and $\sum_i \pi_i = 1$. For finite irreducible chains, positive recurrence is automatic.
6. The convergence theorem states that for ergodic chains (irreducible, aperiodic, positive recurrent), $P_{ij}^{(n)} \to \pi_j$ as $n \to \infty$: the chain forgets its starting state.
7. Detailed balance ($\pi_i P_{ij} = \pi_j P_{ji}$ for all $i, j$) implies stationarity and characterizes reversible chains. Birth-death chains are always reversible. The Metropolis-Hastings algorithm constructs reversible chains to sample from any target distribution.
8. Applications in communications include the Gilbert-Elliott channel model (bursty errors), ARQ protocols (retransmission analysis), slotted ALOHA (random access stability), and age of information (freshness of status updates).
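Points 2, 5, and 6 can be checked numerically in a few lines. The sketch below uses a hypothetical two-state chain (in the spirit of a good/bad Gilbert-Elliott channel; the specific probabilities are made up for illustration): the $n$-step matrix is the matrix power $P^n$, the stationary distribution $\pi$ is the left eigenvector of $P$ for eigenvalue 1, and for large $n$ every row of $P^n$ approaches $\pi$.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix for a 2-state chain:
# P[i, j] = Pr(next state = j | current state = i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Chapman-Kolmogorov: the n-step transition matrix is the matrix power P^n.
P_4 = np.linalg.matrix_power(P, 4)

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1 (solves pi = pi P).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Convergence theorem: this chain is irreducible and aperiodic (self-loops),
# so every row of P^n approaches pi as n grows.
P_100 = np.linalg.matrix_power(P, 100)
print(pi)    # for this P, pi = (5/6, 1/6)
print(P_100) # both rows are (numerically) equal to pi
```

For this chain the balance equation $\pi_0 \cdot 0.1 = \pi_1 \cdot 0.5$ gives $\pi = (5/6, 1/6)$ directly, which the eigenvector computation reproduces.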
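Point 7 can likewise be illustrated with a minimal Metropolis sampler (the special case of Metropolis-Hastings with a symmetric proposal, so the Hastings correction cancels). The target weights and the ring of four states below are invented for the example; the sampler only needs ratios of the target, so no normalizing constant is required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized target distribution on states {0, 1, 2, 3}.
weights = np.array([1.0, 2.0, 4.0, 1.0])

def metropolis_step(x):
    # Symmetric +/-1 random-walk proposal on a ring of 4 states.
    y = (x + rng.choice([-1, 1])) % 4
    # Accept with probability min(1, pi(y)/pi(x)); the proposal is
    # symmetric, so this acceptance rule enforces detailed balance.
    if rng.random() < min(1.0, weights[y] / weights[x]):
        return y
    return x

# Run the chain; by reversibility its empirical state frequencies
# converge to the normalized target.
x, counts = 0, np.zeros(4)
for _ in range(200_000):
    x = metropolis_step(x)
    counts[x] += 1
print(counts / counts.sum())  # close to weights / weights.sum()
```

Because each accept/reject step satisfies detailed balance with respect to the target, the target is stationary for the chain, which is exactly the construction the summary describes.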
Looking Ahead
Chapter 18 extends the Markov framework to continuous time: continuous-time Markov chains (CTMCs) and the Poisson process. The key new ingredient is that the holding time in each state is exponentially distributed. CTMCs model arrival processes, birth-death queues, and channel state dynamics at the millisecond timescale relevant to wireless systems.