Chapter Summary
Key Points
1. The method of types as a refined lens. Every sequence $x^n$ has a type $P_{x^n}$ (its empirical distribution), and the type class $T(P)$ contains all sequences of type $P$. Three fundamental facts drive the entire chapter: (1) the number of types is polynomial in $n$, at most $(n+1)^{|\mathcal{X}|}$; (2) the type class size is $|T(P)| \doteq 2^{nH(P)}$; and (3) the probability of a type class under an i.i.d. source $Q$ is $Q^n(T(P)) \doteq 2^{-nD(P\|Q)}$. These replace the softer bounds of typicality with exponentially tight estimates (see the first numerical sketch after this list).
2. Sanov's theorem: geometry on the simplex. The probability that the empirical distribution falls in a set $E$ decays as $2^{-nD(P^*\|Q)}$, where $D(P^*\|Q) = \min_{P \in E} D(P\|Q)$ is the minimum KL divergence from $E$ to the truth $Q$. This converts probability questions into optimization problems on the probability simplex: finding the distribution in $E$ closest to $Q$ in the KL sense (second sketch below).
3. Source coding exponent: redundancy buys reliability. Coding a DMS $Q$ at rate $R > H(Q)$ gives error probability decaying as $2^{-nE_s(R)}$, where $E_s(R) = \min_{P : H(P) \ge R} D(P\|Q)$. The exponent grows with the redundancy $R - H(Q)$ and is zero at the Shannon limit $R = H(Q)$ (third sketch below).
4. Channel coding exponents: the reliability function. For a DMC at rate $R < C$, the random coding exponent $E_r(R)$ is achievable and equals the sphere-packing converse $E_{\mathrm{sp}}(R)$ for $R_{\mathrm{crit}} \le R \le C$. Below the critical rate, the expurgated exponent $E_{\mathrm{ex}}(R)$ improves upon $E_r(R)$. At capacity, all exponents are zero: the boundary of reliable communication (fourth sketch below).
5. Theory vs. practice. Error exponents characterize the ultimate reliability of unstructured codes at fixed rates. Practical codes (LDPC, polar, turbo) trade error exponent for decoding complexity, achieving capacity through iterative decoding at rates approaching $C$ rather than through exponential reliability at a fixed $R < C$. For short-blocklength regimes (5G URLLC), the finite-blocklength framework (Chapter 26) provides tighter characterizations.
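As a numerical companion to point 1, here is a minimal Python sketch, under hypothetical parameters ($n = 100$, binary source $Q = \mathrm{Bern}(0.3)$), that checks the three type facts exactly: the binary type count is $n+1$, the type class size $\binom{n}{k}$ tracks $2^{nH(P)}$, and $Q^n(T(P))$ tracks $2^{-nD(P\|Q)}$ up to polynomial factors.

```python
from math import comb, log2

n, q = 100, 0.3          # hypothetical blocklength and source bias
Q = (1 - q, q)           # i.i.d. source Q on {0, 1}

def H(p):
    """Entropy in bits of a distribution given as a tuple."""
    return sum(-t * log2(t) for t in p if t > 0)

def D(p, r):
    """KL divergence D(p || r) in bits."""
    return sum(t * log2(t / s) for t, s in zip(p, r) if t > 0)

# Fact 1: only n + 1 binary types -- polynomial in n.
print(f"number of types: {n + 1}")

# Facts 2 and 3, checked exactly for a few types P = (1 - k/n, k/n).
for k in (10, 30, 50):
    P = (1 - k / n, k / n)
    size = comb(n, k)                          # |T(P)| exactly
    prob = size * Q[0] ** (n - k) * Q[1] ** k  # Q^n(T(P)) exactly
    print(f"k={k:2d}  log2|T(P)|={log2(size):6.2f}  nH(P)={n * H(P):6.2f}"
          f"  -log2 Q^n(T(P))={-log2(prob):6.2f}  nD(P||Q)={n * D(P, Q):6.2f}")
```

The gaps between the exact and exponential quantities stay of order $\log n$, which is exactly the polynomial slack in the type bounds.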
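For point 2, the sketch below, assuming a Bernoulli source and the illustrative event $E = \{P : P(1) \ge a\}$, performs the Sanov optimization by grid search (the I-projection of $Q$ onto $E$) and compares the resulting exponent against the exact binomial tail probability.

```python
from math import comb, log2, ceil

q, a, n = 0.3, 0.5, 500  # hypothetical: source Q = Bern(q), event E = {P : P(1) >= a}

def Db(p, r):
    """Binary KL divergence D(Bern(p) || Bern(r)) in bits."""
    return sum(t * log2(t / s)
               for t, s in ((p, r), (1 - p, 1 - r)) if t > 0)

# Sanov's optimization: I-projection of Q onto E by grid search on the simplex.
grid = [i / 1000 for i in range(1001)]
p_star = min((p for p in grid if p >= a), key=lambda p: Db(p, q))
print(f"I-projection p* = {p_star:.3f}  (for this E, theory gives p* = a)")
print(f"Sanov exponent min D(P||Q) = {Db(p_star, q):.4f} bits")

# Exact check: Pr(empirical distribution in E) is a binomial tail, and its
# per-symbol exponent approaches the Sanov exponent as n grows.
tail = sum(comb(n, k) * q ** k * (1 - q) ** (n - k)
           for k in range(ceil(a * n), n + 1))
print(f"-log2 Pr / n = {-log2(tail) / n:.4f} at n = {n}")
```

The empirical rate exceeds $D(P^*\|Q)$ by an $O((\log n)/n)$ term, consistent with the polynomial prefactor in Sanov's theorem.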
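For point 3, a sketch of the source coding exponent for a binary DMS, computing $E_s(R)$ by grid search over the simplex; the bias $q = 0.11$ is a hypothetical choice giving $H(Q) \approx 0.5$ bits, so the exponent can be watched growing from zero as $R$ moves above the Shannon limit.

```python
from math import log2

q = 0.11  # hypothetical source bias; H(Q) is about 0.5 bits

def Hb(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def Db(p, r):
    """Binary KL divergence D(Bern(p) || Bern(r)) in bits."""
    return sum(t * log2(t / s)
               for t, s in ((p, r), (1 - p, 1 - r)) if t > 0)

def Es(R, steps=100_000):
    """E_s(R) = min D(P||Q) over binary types P with H(P) >= R, by grid search."""
    return min(Db(i / steps, q) for i in range(1, steps)
               if Hb(i / steps) >= R)

print(f"H(Q) = {Hb(q):.4f} bits")
for R in (0.55, 0.70, 0.90):
    print(f"R = {R:.2f}: E_s(R) = {Es(R):.4f} bits")
```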
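For point 4, a sketch of Gallager's random coding exponent $E_r(R) = \max_{0 \le \rho \le 1} [E_0(\rho) - \rho R]$ for a BSC with hypothetical crossover probability $0.1$; uniform input is optimal by symmetry, and the computed exponent vanishes as $R \to C$.

```python
from math import log2

eps = 0.1                 # hypothetical BSC crossover probability
W = [[1 - eps, eps],      # W[x][y]: channel transition matrix
     [eps, 1 - eps]]
Qx = [0.5, 0.5]           # uniform input, optimal for the BSC

def E0(rho):
    """Gallager's E_0(rho, Qx) in bits."""
    total = 0.0
    for y in (0, 1):
        inner = sum(Qx[x] * W[x][y] ** (1 / (1 + rho)) for x in (0, 1))
        total += inner ** (1 + rho)
    return -log2(total)

def Er(R, steps=1000):
    """Random coding exponent: max over rho in [0, 1] of E_0(rho) - rho * R."""
    return max(E0(i / steps) - (i / steps) * R for i in range(steps + 1))

def Hb(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

C = 1 - Hb(eps)           # BSC capacity
print(f"capacity C = {C:.4f} bits, cutoff rate R_0 = E_0(1) = {E0(1):.4f}")
for R in (0.10, 0.30, 0.50, C):
    print(f"R = {R:.4f}: E_r(R) = {Er(R):.4f}")
```

Below the critical rate the maximizing $\rho$ sticks at $1$, so $E_r(R)$ becomes the straight line $E_0(1) - R$; this is where the expurgated exponent of point 4 takes over as the better bound.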
Looking Ahead
Chapter 5 develops lossless source coding, the practical counterpart to the source coding exponent studied here. We will see how Huffman codes, arithmetic coding, and Lempel-Ziv achieve the entropy limit, and the method of types will reappear when we analyze the universality of LZ compression.