Chapter Summary

Key Points

1. Coded caching converts memory into bandwidth. The MAN scheme achieves delivery load $R = K(1-M/N)/(1+KM/N)$, where the coded multicasting gain $1+KM/N$ grows linearly with the number of users. Each coded multicast message (an XOR of sub-files) simultaneously satisfies the demands of $t+1$ users, where $t = KM/N$ is the caching parameter, each user using its cached side information to cancel unwanted terms.
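The placement and delivery steps above can be sketched end to end for small parameters. The following is an illustrative toy (parameters $K=4$, $N=4$, $M=2$ chosen so that $t = KM/N$ is an integer, and sub-files modeled as random integers under bitwise XOR), not an optimized implementation:

```python
# Toy MAN coded caching: placement, XOR delivery, and per-user decoding.
from itertools import combinations
from functools import reduce
from math import comb
import random

K, N, M = 4, 4, 2                 # users, files, cache size (in files)
t = K * M // N                    # caching parameter t = KM/N = 2
subsets = list(combinations(range(K), t))   # subfile index sets, C(K,t) of them

# Split each file into C(K,t) sub-files; model each sub-file as a random int.
random.seed(0)
files = {n: {T: random.getrandbits(32) for T in subsets} for n in range(N)}

# Placement: user k caches every sub-file W_{n,T} with k in T.
cache = {k: {(n, T): files[n][T] for n in range(N) for T in subsets if k in T}
         for k in range(K)}

# Delivery: for demands d and each (t+1)-subset S, send XOR_{k in S} W_{d_k, S\{k}}.
d = [random.randrange(N) for _ in range(K)]
transmissions = {
    S: reduce(lambda a, b: a ^ b,
              [files[d[k]][tuple(u for u in S if u != k)] for k in S])
    for S in combinations(range(K), t + 1)}

# Decoding: user k recovers each missing sub-file W_{d_k, T} (k not in T) by
# XOR-ing its cached interfering sub-files out of the transmission for S = T u {k}.
for k in range(K):
    for T in subsets:
        if k in T:
            continue
        S = tuple(sorted(T + (k,)))
        recovered = transmissions[S]
        for j in S:
            if j != k:
                recovered ^= cache[k][(d[j], tuple(u for u in S if u != j))]
        assert recovered == files[d[k]][T]   # demand satisfied

# Load: C(K, t+1) transmissions, each of size 1/C(K,t) of a file.
R = comb(K, t + 1) / comb(K, t)
assert abs(R - K * (1 - M / N) / (1 + K * M / N)) < 1e-12
print(f"delivery load R = {R}")
```

For these parameters $R = \binom{4}{3}/\binom{4}{2} = 2/3$, matching $K(1-M/N)/(1+KM/N)$.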

2. The MAN scheme is optimal under uncoded placement. For $N \ge K$, no scheme with uncoded cache placement can achieve a lower worst-case delivery load. Even with arbitrarily complex coded placement, the load cannot be reduced by more than a factor of 2. The optimality proof uses an induction argument combined with index coding converses.

3. Multi-antenna coded caching achieves additive DoF $L + t$. With $L$ transmit antennas and coded caching parameter $t = KM/N$, the spatial multiplexing gain and coded multicasting gain are complementary. The server can serve $\min(L+t, K)$ user demands per time slot by combining zero-forcing beamforming with coded multicast messages.
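The additive nature of the two gains can be illustrated numerically. This is a sketch (the function name `sum_dof` and the parameter choices are ours; exact delivery-time expressions depend on the specific multi-antenna scheme):

```python
# Sum-DoF min(L + t, K): spatial multiplexing (L) and coded caching (t) add.
def sum_dof(L, K, M, N):
    t = K * M // N            # coded caching parameter, assumed integer here
    return min(L + t, K)

K, N, M = 10, 10, 2           # t = 2: caching alone serves t + 1 = 3 users/slot
for L in (1, 2, 4, 8):
    # Zero-forcing alone serves min(L, K) users; combined: min(L + t, K).
    print(f"L={L}: ZF alone {min(L, K)}, combined {sum_dof(L, K, M, N)}")
```

Note that the combined DoF saturates at $K$ (here at $L = 8$): once every user is served in each slot, extra antennas or cache cannot help further.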

4. D2D coded caching achieves $\Theta(M/N)$ per-user throughput independent of $K$. In serverless D2D networks, coded caching combined with spatial reuse provides perfect throughput scaling: adding users does not reduce per-user throughput, because each new user contributes both demand and cached content.
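The $K$-independence is visible from the D2D delivery load: with multicast groups served by their own members, the total load $K(1-M/N)/(KM/N)$ simplifies to $N/M - 1$, with no dependence on $K$. A quick check (the function name is ours):

```python
# D2D coded caching delivery load: K(1 - M/N) / t with t = KM/N,
# which simplifies to N/M - 1 -- independent of the number of users K.
def d2d_load(K, M, N):
    t = K * M / N
    return K * (1 - M / N) / t

for K in (10, 100, 1000):
    print(K, d2d_load(K, 1, 10))   # N/M - 1 = 9 for every K
```

Since total load stays fixed as $K$ grows, the per-user share of the shared medium, and hence per-user throughput, scales as $\Theta(M/N)$.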

5. Extensions preserve the core gain. Correlated files reduce the effective library size (Wan/Tuninetti/Ji/Caire). Demand privacy can be achieved with negligible overhead (Wan/Caire). The Fog-RAN extension combines edge caching with user caching for two-layer gains.

6. Subpacketization is the main practical barrier. The MAN scheme requires splitting each file into $\binom{K}{t}$ sub-files, which grows exponentially in $K$. Practical deployments use user clustering, combinatorial designs, or placement delivery arrays to control subpacketization at the cost of reduced multicasting gain.
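The exponential growth is easy to see numerically. A sketch at the fixed memory ratio $M/N = 1/2$, so that $t = K/2$ and $\binom{K}{K/2} \approx 2^K/\sqrt{\pi K/2}$ by Stirling's approximation:

```python
from math import comb

# Subpacketization F = C(K, t) of the MAN scheme at M/N = 1/2 (t = K/2).
for K in (8, 16, 32, 64):
    print(f"K={K}: F = {comb(K, K // 2)}")
```

Already at $K = 64$ each file must be split into more than $10^{18}$ sub-files, which is why practical schemes trade multicasting gain for manageable subpacketization.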

Looking Ahead

Chapters 25-27 have explored three emerging paradigms: cooperative communication (turning distributed nodes into virtual MIMO), finite-blocklength theory (characterizing performance at practical code lengths), and coded caching (converting memory into bandwidth). Chapter 28 takes yet another direction, exploring the intersection of machine learning and information theory. Deep learning has revolutionized signal processing, but can it also improve coding and decoding? And conversely, can information-theoretic tools help us understand why deep learning works? These questions connect the classical theory of this book to the most active area of modern engineering research.