The Memory-Load Theorem and Its Proof
The Headline Result
We are now ready for the headline result of coded caching. Having built the placement (§2.2) and verified the delivery (§2.3), the rate formula falls out as a direct counting argument. The result is clean, memorable, and, when first seen, slightly astonishing: the rate decomposes into a local factor $(1 - M/N)$ times a coded factor $\frac{1}{1 + KM/N}$, with the latter growing the total gain without bound in $K$.
Theorem: Maddah-Ali–Niesen Memory-Load Theorem
Let $K, N \in \mathbb{N}$, $M \in \{0, N/K, 2N/K, \ldots, N\}$, and $t = KM/N$, i.e., $t$ is an integer. Then the MAN scheme achieves the worst-case delivery rate
$$R(M) = K\left(1 - \frac{M}{N}\right) \cdot \frac{1}{1 + KM/N} = \frac{K - t}{t + 1}.$$
We send $\binom{K}{t+1}$ coded messages, each of size $F/\binom{K}{t}$ bits. Total bits is $\binom{K}{t+1} \cdot F/\binom{K}{t} = F \cdot \frac{K-t}{t+1}$. Rate equals total bits divided by $F$: done. The only remaining question is whether a shorter scheme exists; that is Chapter 3.
Count coded messages
The delivery phase transmits one coded message per $(t+1)$-subset of $\{1, \ldots, K\}$. Number of such subsets: $\binom{K}{t+1}$.
Size of each coded message
Each coded message is an XOR of $t+1$ subfiles, each of size $F/\binom{K}{t}$ bits. The XOR preserves size, so each coded message is $F/\binom{K}{t}$ bits.
Total delivery bits
$$\binom{K}{t+1} \cdot \frac{F}{\binom{K}{t}} = F \cdot \frac{K-t}{t+1},$$
using the identity $\binom{K}{t+1}/\binom{K}{t} = (K-t)/(t+1)$.
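The binomial identity above is easy to confirm numerically; a minimal sketch using Python's standard library (the sweep range is our choice):

```python
from math import comb

# Check binom(K, t+1) / binom(K, t) == (K - t) / (t + 1)
# over a sweep of small (K, t) pairs (illustrative values).
for K in range(1, 20):
    for t in range(K):
        lhs = comb(K, t + 1) / comb(K, t)
        rhs = (K - t) / (t + 1)
        assert abs(lhs - rhs) < 1e-12, (K, t)
print("identity verified")
```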
Rate in file units
The rate is the total delivery bits divided by the file size $F$:
$$R = \frac{K - t}{t + 1}.$$
Substitute $t = KM/N$
With $t = KM/N$:
$$R(M) = \frac{K - KM/N}{1 + KM/N} = K\left(1 - \frac{M}{N}\right) \cdot \frac{1}{1 + KM/N}.$$
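The two forms of the rate can be cross-checked in a few lines; a sketch (the function names are ours, not from the text):

```python
def man_rate_t(K, t):
    """Rate in file units as a function of t = K*M/N."""
    return (K - t) / (t + 1)

def man_rate_M(K, N, M):
    """Rate in file units as a function of the cache size M."""
    return K * (1 - M / N) / (1 + K * M / N)

# The two expressions agree at every integer t (K = N = 10 is illustrative).
K, N = 10, 10
for t in range(K + 1):
    M = t * N / K
    assert abs(man_rate_t(K, t) - man_rate_M(K, N, M)) < 1e-12
```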
Correctness (already proven in §2.3)
Theorem §2.3 showed every user decodes its requested file. Hence the MAN scheme is a valid caching scheme with rate $R(M) = \frac{K-t}{t+1}$.
Key Takeaway
Coded caching gain $= 1 + KM/N = 1 + t$. Compared with uncoded delivery ($R_{\text{uncoded}}(M) = K(1 - M/N)$), the MAN scheme divides the rate by $1 + KM/N$. This is a multiplicative saving that grows with the aggregate cache size $KM$, not just the per-user cache $M$. For large networks this gain dominates the trivial local cache gain $(1 - M/N)$.
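The multiplicative saving can be checked directly by taking the ratio of the two rates; a sketch with illustrative parameter values:

```python
def uncoded_rate(K, m):
    """Uncoded baseline K(1 - M/N), with m = M/N."""
    return K * (1 - m)

def man_rate(K, m):
    """MAN rate K(1 - M/N) / (1 + K*M/N), with m = M/N."""
    return K * (1 - m) / (1 + K * m)

K, m = 20, 0.25      # illustrative: 20 users, caches hold 1/4 of the library
gain = uncoded_rate(K, m) / man_rate(K, m)
assert abs(gain - (1 + K * m)) < 1e-12   # coded gain = 1 + K*M/N = 6 here
```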
Memory–Load Tradeoff: MAN vs. Uncoded
The definitive plot of Chapter 2. The blue solid curve is the MAN rate $R(M) = K(1 - M/N) \cdot \frac{1}{1 + KM/N}$; the red dashed curve is the uncoded baseline $K(1 - M/N)$. Comparison curves at different values of $K$ show that the MAN rate shrinks as $K$ grows: more users = more coded multicasting opportunities. For $K \to \infty$ at fixed $M/N$, the MAN rate saturates at $N/M - 1$, a finite limit independent of $K$.
MAN Rate vs. Number of Users (at Fixed $M/N$)
As $K$ grows at fixed memory ratio $M/N$, the uncoded rate $K(1 - M/N)$ grows linearly, while the MAN rate saturates at $N/M - 1$. The gap between the two curves is the coded caching gain $1 + KM/N$, which is unbounded in $K$. This is the single most important qualitative feature of coded caching.
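The saturation is easy to see numerically; a sketch at memory ratio $M/N = 1/2$ (our choice of parameters):

```python
def man_rate(K, m):
    # MAN rate K(1 - m) / (1 + K*m), with m = M/N
    return K * (1 - m) / (1 + K * m)

m = 0.5
for K in (10, 100, 1000, 10000):
    print(K, round(man_rate(K, m), 4))   # approaches N/M - 1 = 1.0
```

Even at ten thousand users the rate stays below one file unit, while the uncoded rate at the same point would be $K/2 = 5000$.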
The Maddah-Ali–Niesen Scheme — Framework
The MAN scheme opened the field of coded caching. Though the 2014 paper lists Maddah-Ali and Niesen as authors (Caire was in the acknowledgments), the subsequent CommIT program — extending the scheme to wireless, D2D, correlated-files, privacy, and cloud-RAN settings — is a sustained collaboration led by Caire. The basic scheme presented in this chapter is the "shared link" version; every later chapter builds on this scaffold, so mastering it is non-optional.
Key insight (restated). The placement is combinatorial (each subfile in exactly $t$ caches, uniformly distributed); the delivery is coded (one XOR per $(t+1)$-subset). Together they achieve $R(M) = K(1 - M/N) \cdot \frac{1}{1 + KM/N}$, a rate that is later proven optimal under uncoded placement (Chapter 3).
Two Illuminating Extremes
The formula $R(M) = \frac{K-t}{t+1}$ has two informative limits:
Small (): . Almost no coding gain — each user needs a nearly full unicast, as caches are too small to create multicast opportunities.
Large (): . Naturally — if every user can cache the whole library, no delivery is needed.
Intermediate ($M/N$ moderate): The gain $1 + KM/N$ is substantial. For $M/N = 1/2$, the MAN rate is a fraction $\frac{2}{K+2}$ of the uncoded rate. This is the regime where coded caching shines.
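The three regimes can be read off directly from $R = (K-t)/(t+1)$; a quick sketch for an assumed $K = 100$:

```python
def man_rate(K, t):
    # MAN rate (K - t) / (t + 1), with t = K*M/N
    return (K - t) / (t + 1)

K = 100
assert man_rate(K, 0) == K     # small M: rate ~ K, essentially pure unicast
assert man_rate(K, K) == 0     # large M: everything cached, nothing to send
print(man_rate(K, K // 2))     # intermediate M/N = 1/2: about 1 file unit
```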
Common Mistake: Rate Is in File Units, Not Bits
Mistake:
Stating a numerical rate without specifying the unit, and then being confused when comparing to capacity results.
Correction:
$R$ is measured in files per shared-link channel use, where one shared-link channel use carries $F$ bits (in the error-free model). A rate $R$ means the server sends $RF$ bits per delivery round to satisfy all users. Equivalently, the rate is dimensionless (files/files). When you later combine coded caching with a noisy channel of capacity $C$ bits/sec, the effective file delivery time is $RF/C$ seconds per round.
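As a worked unit check with illustrative numbers (the rate, file size, and capacity values below are our assumptions, not from the text):

```python
R = 2 / 3      # MAN rate in file units (example value)
F = 1e9        # file size in bits (assumed)
C = 1e8        # channel capacity in bits/sec (assumed)

bits_per_round = R * F    # bits the server sends per delivery round
T = bits_per_round / C    # effective delivery time, seconds per round
print(T)                  # ~6.67 seconds
```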
Quick Check
For $K = 4$, $N = 4$, $M = 2$, compute the MAN worst-case rate.
Correct. $t = KM/N = 2$. $R = \frac{K-t}{t+1} = \frac{2}{3}$ file units. Equivalently: $R = K\left(1 - \frac{M}{N}\right) \cdot \frac{1}{1 + KM/N} = 4 \cdot \frac{1}{2} \cdot \frac{1}{3} = \frac{2}{3}$.