Chapter Summary
Key Points
1. The cache-aided cloud-RAN architecture places a cache of size μN files (a fraction μ of the N-file library) at each of K edge nodes, connected to a central cloud via per-EN fronthaul capacity C_F. The aggregate caching gain is Kμ.
2. Normalized delivery time (NDT) is a dimensionless latency metric normalized to the infinite-fronthaul MU-MIMO baseline. NDT = 1 is the ideal; NDT > 1 reflects the architecture's bottleneck.
3. NDT formula. Achievable upper bound: δ_ach(μ, r) = δ_E + (1 − μ)/r. This combines the downlink cooperative Lampiris-Caire DoF term δ_E with the fronthaul transfer term (1 − μ)/r, which accounts for shipping the uncached fraction of each request over fronthaul of rate r. The cut-set lower bound matches when μ = 1.
4. Cache-fronthaul substitutability. Iso-NDT contours are hyperbolae in the (μ, r) plane. Doubling either resource reduces its contribution to NDT by half. Operators choose a point along the Pareto frontier based on cost structure.
5. The CommIT NDT framework (Sengupta-Tandon-Simeone, 2017, with Caire-adjacent follow-ups) is the unified language for cache-aided C-RAN analysis. It cleanly isolates the cache-fronthaul tradeoff from the underlying SNR.
6. Saturation behavior. μ = 1: NDT = 1 (full cache, no fronthaul needed). r → ∞: NDT reduces to the Lampiris-Caire DoF formula for the cooperative K-antenna BC.
7. Deployment implications. In 5G NR C-RAN deployments, modest per-EN caches combined with typical fronthaul provisioning already bring NDT close to the ideal. Design tools benefit from the NDT framework when sizing fronthaul and cache budgets.
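To make the tradeoff behind these points concrete, here is a minimal numerical sketch. It assumes the simplified additive form NDT(μ, r) = δ_E + (1 − μ)/r with the edge DoF term normalized to δ_E = 1; the function name `ndt` and that normalization are illustrative assumptions for intuition, not the chapter's exact expression.

```python
def ndt(mu: float, r: float, delta_edge: float = 1.0) -> float:
    """Normalized delivery time for cache fraction mu and fronthaul rate r.

    Assumed additive model: edge DoF term (normalized to delta_edge)
    plus fronthaul transfer term (1 - mu) / r for the uncached fraction.
    """
    if not 0.0 <= mu <= 1.0:
        raise ValueError("cache fraction mu must lie in [0, 1]")
    if mu == 1.0:
        return delta_edge  # full cache: no fronthaul transfer needed
    return delta_edge + (1.0 - mu) / r

# Saturation: a full cache gives the ideal NDT = 1.
assert ndt(1.0, 0.5) == 1.0

# Substitutability: doubling r halves the fronthaul contribution.
assert ndt(0.5, 2.0) - 1.0 == (ndt(0.5, 1.0) - 1.0) / 2
```

Under this model the fronthaul term vanishes either by caching everything (μ = 1) or by over-provisioning fronthaul (r → ∞), mirroring the saturation behavior in point 6.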
Looking Ahead
Chapter 9 treats multi-server coded caching: multiple independent transmitters (e.g., cooperating base stations) each holding the full library, coordinated at the placement/delivery level. The multi-server MAN scheme extends the single-transmitter rate formula and sheds light on the differences between shared-cache and dedicated-cache models. Chapters 10-11 then move to D2D networks where users themselves are transmitters.
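As background for the extension previewed above, the single-transmitter Maddah-Ali-Niesen (MAN) delivery rate can be sketched numerically; `man_rate` is a hypothetical helper name, and the formula is the standard R = (K − t)/(1 + t) for K users with integer cache parameter t = Kμ.

```python
def man_rate(K: int, t: int) -> float:
    """Maddah-Ali-Niesen delivery rate (in files) for K users,
    each caching a fraction t/K of the library (t = K * mu integer):
    R = (K - t) / (1 + t)."""
    if not 0 <= t <= K:
        raise ValueError("need 0 <= t <= K")
    return (K - t) / (1 + t)

# Example: K = 10 users with no caching requires 10 file transmissions;
# caching 40% of the library (t = 4) cuts this to (10 - 4)/(1 + 4) = 1.2.
assert man_rate(10, 0) == 10.0
assert man_rate(10, 4) == 1.2
```

The denominator (1 + t) is the coded multicasting gain that the multi-server scheme of Chapter 9 builds on.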