Chapter Summary

Key Points

  1. The normal approximation $R^*(n, \epsilon) = C - \sqrt{V/n}\,Q^{-1}(\epsilon) + O(\log n/n)$ is the fundamental formula for finite-blocklength coding. The channel dispersion $V = \text{Var}[\iota(X;Y)]$ governs the speed of convergence to capacity: two channels with the same capacity but different dispersions behave very differently at short blocklengths.

  2. The RCU bound and the meta-converse provide tight, computable, non-asymptotic bounds on $R^*(n, \epsilon)$ that replace the asymptotic achievability-converse pair (random coding + Fano). Together they sandwich the true maximum rate to within a fraction of a bit for most practical channels, even at $n \sim 100$.

  3. The rate-reliability-blocklength tradeoff is the central design tool for URLLC. At $n = 200$ and $\epsilon = 10^{-5}$, the achievable rate can be 20-40% below the Shannon capacity, depending on the channel and SNR. Using the capacity formula for short-blocklength design leads to significant under-provisioning of resources.

  4. The AWGN dispersion $V = \text{SNR}(\text{SNR}+2)/(2(1+\text{SNR})^2)$ in nats$^2$ approaches $1/2$ at high SNR. In fading channels, the dispersion includes an additional term from the fading variance, which dominates at low diversity orders and makes multi-antenna systems essential for URLLC.

  5. Multi-user finite-blocklength theory extends the normal approximation to the MAC and the BC. The MAC dispersion is a matrix governing the shrinkage of the capacity region. The sum-rate dispersion equals the point-to-point dispersion at the sum power. Superposition coding remains second-order optimal for the degraded BC.

  6. Massive MTC and grant-free access operate in the many-access regime, where the number of active users $K_a$ grows with $n$. The per-user energy-per-bit requirement grows as $\Theta(\log K_a)$, fundamentally limiting the scalability of short-packet random access beyond the $\sqrt{V/n}$ penalty.
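
The rate penalty in Key Points 1, 3, and 4 is easy to evaluate numerically. The sketch below is a minimal illustration assuming an AWGN channel (the function name and the 10 dB SNR value are ours, not the chapter's): it plugs the AWGN capacity and dispersion into the normal approximation, dropping the $O(\log n/n)$ term.

```python
import math
from statistics import NormalDist

def awgn_normal_approx(n, eps, snr):
    """Second-order rate estimate R*(n, eps) in nats per channel use."""
    C = 0.5 * math.log(1.0 + snr)                     # AWGN capacity, nats
    V = snr * (snr + 2.0) / (2.0 * (1.0 + snr) ** 2)  # dispersion, nats^2
    q_inv = NormalDist().inv_cdf(1.0 - eps)           # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv               # O(log n / n) term dropped

# URLLC-style operating point from Key Point 3: n = 200, eps = 1e-5,
# at an (assumed) linear SNR of 10, i.e. 10 dB.
snr = 10.0
C = 0.5 * math.log(1.0 + snr)
R = awgn_normal_approx(200, 1e-5, snr)
print(f"capacity: {C:.3f} nats/use, R*(200, 1e-5) ~ {R:.3f} nats/use")
print(f"backoff from capacity: {100.0 * (1.0 - R / C):.0f}%")
```

At this SNR the backoff is on the order of 20%, consistent with the range quoted above; lower SNRs and fading push it higher.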

Looking Ahead

The finite-blocklength analysis shows that short codes incur a $\sqrt{V/n}$ rate penalty. But what if we could reduce the effective blocklength seen by the receiver by pre-placing content at the user? This is precisely the idea behind coded caching (Chapter 27). By exploiting user caches to create multicast opportunities, coded caching achieves a delivery rate that scales inversely with the cache size, effectively turning memory into bandwidth. The information-theoretic framework of coded caching provides fundamental limits on the memory-rate tradeoff, connecting to the source-channel duality ideas from the first half of this book.
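
As a preview, the inverse scaling of delivery rate with cache size can be made concrete with the standard coded caching rate expression of the Maddah-Ali–Niesen scheme (Chapter 27's exact notation is assumed here): with $K$ users, $N$ files, and a per-user cache of $M$ files, the delivery load is $K(1 - M/N)\cdot\frac{1}{1 + KM/N}$ file transmissions.

```python
# Sketch of the coded caching memory-rate tradeoff (assumed notation):
# K users, N files, per-user cache of M files. Uncoded delivery needs
# K(1 - M/N) transmissions; coding adds a multicast factor 1/(1 + KM/N).
def delivery_rate(K, N, M):
    """Maddah-Ali-Niesen delivery rate, in file transmissions."""
    local = 1.0 - M / N            # local caching gain per user
    multicast = 1.0 + K * M / N    # global (multicast) caching gain
    return K * local / multicast

K, N = 20, 20
for M in (0, 5, 10, 20):
    print(f"M={M:2d}: uncoded {K * (1 - M / N):5.2f} "
          f"vs coded {delivery_rate(K, N, M):5.2f}")
```

The factor $1 - M/N$ is the local caching gain available even without coding; the extra factor $1/(1 + KM/N)$ is the global multicast gain that turns memory into bandwidth.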