Chapter Summary

Key Points

1. The method of types as a refined lens. Every sequence $\mathbf{x} \in \mathcal{X}^n$ has a type (empirical distribution) $\hat{P}_{\mathbf{x}}$, and the type class $T_Q$ contains all sequences of type $Q$. Three fundamental facts drive the entire chapter: (1) the number of types is polynomial in $n$, at most $(n+1)^{|\mathcal{X}|}$; (2) the type class size satisfies $|T_Q| \doteq 2^{nH(Q)}$; and (3) the probability of a type class under an i.i.d. source $P$ is $P^n(T_Q) \doteq 2^{-nD(Q \| P)}$. These replace the softer bounds of typicality with exponentially tight estimates.

2. Sanov's theorem: geometry on the simplex. The probability that the empirical distribution falls in a set $\mathcal{E}$ decays as $2^{-nD^*(\mathcal{E} \| P)}$, where $D^*(\mathcal{E} \| P) = \min_{Q \in \mathcal{E}} D(Q \| P)$ is the minimum KL divergence from $\mathcal{E}$ to the truth $P$. This converts probability questions into optimization problems on the probability simplex: finding the distribution in $\mathcal{E}$ closest to $P$ in the KL sense.

3. Source coding exponent: redundancy buys reliability. Coding a DMS at rate $R > H(P)$ gives error probability decaying as $2^{-nE_s(R)}$, where $E_s(R) = \min_{Q : H(Q) \geq R} D(Q \| P)$. The exponent grows with the redundancy $R - H(P)$ and is zero at the Shannon limit.

4. Channel coding exponents: the reliability function. For a DMC at rate $R < C$, the random coding exponent $E_r(R)$ is achievable and equals the sphere-packing converse $E_{sp}(R)$ for $R \geq R_{cr}$. Below the critical rate $R_{cr}$, the expurgated exponent $E_{ex}(R)$ improves upon $E_r(R)$. At capacity, all exponents are zero: the boundary of reliable communication.

5. Theory vs. practice. Error exponents characterize the ultimate reliability of unstructured codes at fixed rates. Practical codes (LDPC, polar, turbo) trade error exponent for decoding complexity, achieving capacity through iterative decoding at rates approaching $C$ rather than through exponential reliability at fixed $R < C$. For short-blocklength regimes (5G URLLC), the finite-blocklength framework (Chapter 26) provides tighter characterizations.
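The three facts behind the method of types in point 1 are finite-$n$ sandwich bounds, not just asymptotics, so they can be verified exhaustively for a small alphabet. The following sketch (alphabet size, blocklength, and source distribution are arbitrary illustrative choices) checks all three:

```python
import math

def types(n, k):
    """Yield all count vectors (types with denominator n) over a k-letter alphabet."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in types(n - first, k - 1):
            yield (first,) + rest

def entropy(q):
    """Shannon entropy H(q) in bits."""
    return -sum(x * math.log2(x) for x in q if x > 0)

def kl(q, p):
    """KL divergence D(q || p) in bits (assumes p > 0 wherever q > 0)."""
    return sum(x * math.log2(x / y) for x, y in zip(q, p) if x > 0)

def type_class_size(counts):
    """|T_Q|: the multinomial coefficient n! / prod(counts_i!)."""
    size = math.factorial(sum(counts))
    for c in counts:
        size //= math.factorial(c)
    return size

n, k = 12, 3
p = (0.5, 0.3, 0.2)                  # i.i.d. source distribution (illustrative)
all_types = list(types(n, k))

# Fact (1): polynomially many types.
assert len(all_types) <= (n + 1) ** k

for counts in all_types:
    q = tuple(c / n for c in counts)
    size = type_class_size(counts)
    prob = size * math.prod(pi ** c for pi, c in zip(p, counts))
    # Fact (2): (n+1)^{-k} 2^{nH(Q)} <= |T_Q| <= 2^{nH(Q)}
    assert 2 ** (n * entropy(q)) / (n + 1) ** k <= size <= 2 ** (n * entropy(q)) * (1 + 1e-9)
    # Fact (3): (n+1)^{-k} 2^{-nD(Q||P)} <= P^n(T_Q) <= 2^{-nD(Q||P)}
    assert 2 ** (-n * kl(q, p)) / (n + 1) ** k <= prob <= 2 ** (-n * kl(q, p)) * (1 + 1e-9)
```

The polynomial prefactor $(n+1)^{-|\mathcal{X}|}$ in the lower bounds is exactly the slack that the $\doteq$ notation absorbs.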
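Sanov's theorem in point 2 can be checked exactly in a binary example: for a fair coin and $\mathcal{E} = \{Q : Q(1) \geq 0.8\}$, the KL minimizer over $\mathcal{E}$ sits at the boundary $Q(1) = 0.8$, and the exact binomial tail probability approaches the predicted exponent. The threshold $0.8$ is an arbitrary illustrative choice:

```python
import math

def kl2(q, p):
    """Binary KL divergence D(q || p) in bits."""
    return sum(x * math.log2(x / y) for x, y in ((q, p), (1 - q, 1 - p)) if x > 0)

p, thresh = 0.5, 0.8
# Sanov: P(empirical mean >= thresh) decays as 2^{-n D*} to first exponential order,
# with D* = min_{q >= thresh} D(q || p); for thresh > p the minimizer is q = thresh.
d_star = kl2(thresh, p)

rates = {}
for n in (50, 200, 800):
    tail = sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(math.ceil(thresh * n), n + 1))
    rates[n] = -math.log2(tail) / n
    print(f"n={n:4d}  empirical exponent={rates[n]:.4f}  D*={d_star:.4f}")
```

The empirical exponent decreases toward $D^* \approx 0.278$ bits as $n$ grows; the remaining gap is the polynomial prefactor hidden by $\doteq$.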
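The source coding exponent in point 3 is a one-dimensional optimization in the binary case, so a brute-force grid search suffices to trace $E_s(R)$. The source bias $p = 0.11$ is an illustrative assumption, chosen so that $H(P) \approx 0.5$ bits:

```python
import math

def h2(q):
    """Binary entropy in bits."""
    return -sum(x * math.log2(x) for x in (q, 1 - q) if x > 0)

def kl2(q, p):
    """Binary KL divergence D(q || p) in bits."""
    return sum(x * math.log2(x / y) for x, y in ((q, p), (1 - q, 1 - p)) if x > 0)

def source_exponent(R, p, grid=100001):
    """E_s(R) = min over q with h2(q) >= R of D(q || p), by grid search on [0, 1]."""
    return min(kl2(i / (grid - 1), p) for i in range(grid)
               if h2(i / (grid - 1)) >= R)

p = 0.11                              # H(p) ~ 0.4999 bits
for R in (0.5, 0.6, 0.8, 1.0):
    print(f"R={R:.1f}  E_s(R)={source_exponent(R, p):.4f}")
```

Consistent with point 3, the exponent is zero for $R \leq H(P)$ (the true type already satisfies the constraint) and grows with the redundancy $R - H(P)$.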
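For point 4, the random coding exponent of a BSC with uniform inputs reduces to a one-parameter maximization, $E_r(R) = \max_{0 \le \rho \le 1} [E_0(\rho) - \rho R]$ with $E_0(\rho) = \rho - (1+\rho)\log_2\!\big(\varepsilon^{1/(1+\rho)} + (1-\varepsilon)^{1/(1+\rho)}\big)$ (Gallager's $E_0$ specialized to the BSC). A sketch, with the crossover probability $\varepsilon = 0.05$ as an illustrative choice:

```python
import math

def h2(q):
    """Binary entropy in bits."""
    return -sum(x * math.log2(x) for x in (q, 1 - q) if x > 0)

def E0(rho, eps):
    """Gallager's E_0(rho) for a BSC(eps) with the uniform input distribution."""
    s = eps ** (1 / (1 + rho)) + (1 - eps) ** (1 / (1 + rho))
    return rho - (1 + rho) * math.log2(s)

def Er(R, eps, grid=1001):
    """Random coding exponent E_r(R) = max over rho in [0, 1] of E_0(rho) - rho*R."""
    return max(E0(i / (grid - 1), eps) - (i / (grid - 1)) * R for i in range(grid))

eps = 0.05
C = 1 - h2(eps)                       # capacity of BSC(0.05), ~0.714 bits/use
for R in (0.2, 0.4, 0.6, C):
    print(f"R={R:.3f}  E_r(R)={Er(R, eps):.4f}")
```

The printout shows $E_r(R)$ shrinking as $R$ rises and hitting $0$ at $R = C$, the "all exponents are zero at capacity" boundary; the sphere-packing and expurgated bounds would be computed analogously from their own single-parameter forms.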

Looking Ahead

Chapter 5 develops lossless source coding, the practical counterpart to the source coding exponent studied here. We will see how Huffman codes, arithmetic coding, and Lempel-Ziv achieve the entropy limit, and the method of types will reappear when we analyze the universality of LZ compression.