Prerequisites & Notation

Prerequisites

This chapter introduces discrete-time Markov chains (DTMCs), building on your knowledge of conditional probability, discrete random variables, and basic linear algebra. The Markov property reduces the complexity of multi-step dependence to a single-step transition; the resulting theory is both elegant and computationally tractable.

  • Conditional probability and Bayes' theorem (Review fsp/ch02)

    Self-check: Can you compute $\mathbb{P}(A \mid B)$ and apply the law of total probability?

  • Discrete random variables, PMFs, and expectation (Review fsp/ch05)

    Self-check: Can you compute $\mathbb{E}[X]$ for a discrete random variable with a given PMF?

  • Matrix multiplication, eigenvalues, and left eigenvectors

    Self-check: Can you multiply two $3 \times 3$ matrices and find eigenvectors of a given matrix?

  • Geometric series and convergence

    Self-check: Do you know when $\sum_{n=0}^{\infty} r^n$ converges and what it equals?
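If you prefer to rehearse these self-checks numerically, the sketch below uses NumPy (assumed available); the particular matrix and ratio are arbitrary examples, not from the text.

```python
import numpy as np

# Self-check 3: multiply two 3x3 matrices (try it by hand first).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])
B = np.eye(3)
assert np.allclose(A @ B, A)  # multiplying by the identity returns A

# Eigenvalues and right eigenvectors of A; left eigenvectors of A
# are the right eigenvectors of A.T (transpose).
vals, right_vecs = np.linalg.eig(A)
lvals, left_vecs = np.linalg.eig(A.T)
print(np.sort(vals.real))

# Self-check 4: geometric series, sum_{n=0}^inf r^n = 1/(1-r) for |r| < 1.
r = 0.5
partial_sum = sum(r**n for n in range(60))
assert abs(partial_sum - 1 / (1 - r)) < 1e-12
```

Left eigenvectors appear again below: the stationary distribution of a Markov chain is a left eigenvector of the transition matrix with eigenvalue 1.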

Chapter Notation

We introduce notation specific to discrete-time Markov chains. The state space is always countable and denoted S\mathcal{S}.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $\mathbf{P}$ | Transition matrix with entries $p_{ij} = \mathbb{P}(X_{n+1} = j \mid X_n = i)$ | s01 |
| $\mathbf{P}^{n}$ | $n$-step transition matrix with entries $p_{ij}^{(n)}$ | s01 |
| $\boldsymbol{\pi}$ | Stationary (invariant) distribution: row vector satisfying $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$ | s03 |
| $\mathcal{S}$ | State space (countable set) | s01 |
| $i \leftrightarrow j$ | States $i$ and $j$ communicate | s02 |
| $i \to j$ | State $j$ is accessible from state $i$ | s02 |
| $d(i)$ | Period of state $i$ | s02 |
| $f_{ii}$ | Return probability: $\mathbb{P}(\text{ever return to } i \mid X_0 = i)$ | s02 |
| $\mathbf{1}$ | Column vector of all ones | s01 |
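To see the notation in action, here is a minimal sketch in Python with NumPy: a hypothetical two-state chain (the specific transition matrix is an assumption for illustration), its $n$-step matrix $\mathbf{P}^n$, and its stationary distribution $\boldsymbol{\pi}$ computed as a left eigenvector with eigenvalue 1.

```python
import numpy as np

# Transition matrix P for a hypothetical 2-state chain on S = {0, 1}.
# Row i holds p_ij = P(X_{n+1} = j | X_n = i), so each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# n-step transition matrix P^n: entry (i, j) is p_ij^(n).
n = 5
Pn = np.linalg.matrix_power(P, n)
assert np.allclose(Pn.sum(axis=1), 1.0)  # P^n is still stochastic (P^n 1 = 1)

# Stationary distribution: pi = pi P, i.e. pi is a left eigenvector
# of P with eigenvalue 1. Left eigenvectors of P are right
# eigenvectors of P.T; normalize so the entries sum to 1.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, k])
pi = pi / pi.sum()
print(pi)                      # for this P: [0.8, 0.2]
assert np.allclose(pi @ P, pi)
```

Solving $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$ by hand for this matrix gives $\pi_0 = 0.8$, $\pi_1 = 0.2$, matching the eigenvector computation.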