Chapter Summary
Key Points
1. Multi-server coded caching treats a network with L cooperating transmitters (servers), each holding the full library and its own transmit antenna. Under full cooperation, the servers act as a single L-antenna transmitter with joint precoding.
2. DoF formula: DoF = Kγ + L. The caching gain Kγ remains unchanged; the spatial gain multiplies by L (from 1 with a single server to L under full cooperation). This extends Chapter 5's Lampiris-Caire result.
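As a numerical check of the DoF formula, here is a minimal sketch, assuming the standard notation of K users, cache fraction γ = M/N, and L cooperating single-antenna servers:

```python
def sum_dof(K: int, gamma: float, L: int) -> float:
    """Sum DoF of multi-server coded caching under full cooperation:
    caching gain K*gamma plus spatial gain L (assumed model)."""
    return K * gamma + L

# With K = 20 users each caching a quarter of the library (gamma = 0.25),
# the caching gain is 5; adding servers grows only the spatial term.
print(sum_dof(20, 0.25, 1))  # single server: 6.0
print(sum_dof(20, 0.25, 4))  # four cooperating servers: 9.0
```

The point of the comparison: going from one server to four leaves the caching term Kγ = 5 fixed and adds three extra spatial DoF.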
3. Shared vs dedicated caches. Dedicated: one cache per user, the MAN baseline. Shared: Λ caches serve the K users, with K/Λ users per cache. Rate per user scales linearly in K/Λ; at equal aggregate storage, shared caching can be more efficient when K/Λ is moderate (fewer but larger caches).
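The linear scaling in K/Λ can be illustrated numerically. The sketch below assumes uniform user-to-cache association and uses the uniform-case load R = (K/Λ)(Λ − t)/(t + 1) with t = Λγ; this specific formula is an assumption of the sketch, chosen to exhibit the scaling, not a statement from the summary.

```python
def shared_cache_rate(K: int, num_caches: int, gamma: float) -> float:
    """Delivery load with num_caches (Lambda) shared caches and K users
    uniformly assigned, K/Lambda users per cache; t = Lambda*gamma.
    Assumed uniform-association formula: (K/Lambda)*(Lambda - t)/(t + 1)."""
    t = num_caches * gamma
    return (K / num_caches) * (num_caches - t) / (t + 1)

# Doubling the users per cache doubles the load: linear in K/Lambda.
print(shared_cache_rate(10, 10, 0.2))  # 1 user per cache
print(shared_cache_rate(20, 10, 0.2))  # 2 users per cache: twice the load
```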
4. The multi-server MAN scheme uses the standard MAN placement (t = Kγ) plus a cooperatively precoded delivery: (t+L)-subsets of users are served at once, with L parallel streams per subset. Per-server rate: R = K(1 − γ)/(t + L).
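A quick numerical comparison of the per-server rate against the single-server MAN rate; a sketch assuming t = Kγ and the rate expression K(1 − γ)/(t + L):

```python
def multiserver_rate(K: int, gamma: float, L: int) -> float:
    """Per-server delivery rate of the multi-server MAN scheme
    (assumed form): K*(1 - gamma) / (K*gamma + L)."""
    t = K * gamma
    return K * (1 - gamma) / (t + L)

# L = 1 recovers the single-server MAN rate (K - t)/(t + 1).
print(multiserver_rate(20, 0.25, 1))  # 15/6 = 2.5
print(multiserver_rate(20, 0.25, 4))  # 15/9, about 1.67
```

Note how the rate shrinks as L grows: the extra servers enlarge the denominator t + L while the file payload K(1 − γ) is unchanged.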
5. Cell-free massive MIMO + caching is the natural 6G deployment. Caches distributed across the Λ APs give aggregate gain Λγ; AP cooperation gives the spatial DoF L. Total per-user DoF can reach near-unity in dense deployments.
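To see how per-user DoF approaches unity in dense deployments, a sketch assuming Λ APs each caching a fraction γ of the library (aggregate gain Λγ) plus spatial DoF L, with the per-user DoF capped at 1:

```python
def per_user_dof(K: int, num_aps: int, gamma: float, L: int) -> float:
    """Per-user DoF in cached cell-free mMIMO (assumed model):
    (aggregate caching gain + spatial DoF) / K, capped at 1."""
    return min(1.0, (num_aps * gamma + L) / K)

# Sparse deployment vs. a dense one with many cooperating APs.
print(per_user_dof(40, 20, 0.2, 4))    # (4 + 4)/40 = 0.2
print(per_user_dof(40, 100, 0.2, 20))  # (20 + 20)/40, capped at 1.0
```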
6. Cooperation is expensive. Full cooperation requires low-latency inter-server backhaul. Practical deployments cluster 2-8 servers for full cooperation and time-share between clusters. The CommIT group's fog-mMIMO program prototypes these tradeoffs.
7. Distinguishing architectures. MAN user caching (aggregate cache Kγ) vs AP caching in CF-mMIMO (aggregate cache Λγ): the relevant aggregate cache differs. Don't confuse them.
Looking Ahead
Chapter 10 transitions to D2D (device-to-device) caching: users exchange content directly, without a central server. The CommIT result (Ji-Caire-Molisch 2016): per-user throughput scales as Θ(M/N) for random demands. Chapter 11 combines D2D with coded multicasting to analyze when these gains compound (they don't, a CommIT scaling-law insight). The multi-server architecture of Chapter 9 is thus completed by the fully distributed D2D case of Chapters 10-11.
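The Θ(M/N) scaling previewed above contrasts with uncached unicast, where the server time-shares and each user gets throughput of order 1/K. A sketch of the two scalings (the constant c is a hypothetical illustration, not a value from the result):

```python
def unicast_throughput(K: int) -> float:
    """No caching: the server time-shares among K users, ~1/K per user."""
    return 1.0 / K

def d2d_cached_throughput(M: float, N: float, c: float = 1.0) -> float:
    """Ji-Caire-Molisch order scaling (sketch): per-user throughput
    ~ c * M/N, independent of K; c is a hypothetical constant."""
    return c * M / N

# Unicast throughput vanishes as K grows; the D2D term does not.
for K in (100, 1000, 10000):
    print(K, unicast_throughput(K), d2d_cached_throughput(M=10, N=100))
```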