Chapter Summary

Key Points

1. The cache-aided block-fading BC extends Chapter 5's static MIMO BC with a coherence block length $T_c$ and a per-block pilot overhead $\tau$. CSIT quality determines the realizable spatial multiplexing gain $L$.

2. CSIT scales the spatial DoF, not the caching DoF. With normalized estimation error $\sigma_e^2/\sigma^2$, $\mathrm{DoF} = t + L\,(1 - \sigma_e^2/\sigma^2)_+$. The caching gain $t$ is CSIT-independent; the spatial gain is not. On CSIT-poor channels (mmWave, high mobility), caching is disproportionately valuable.
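The split between the CSIT-independent caching gain and the CSIT-scaled spatial gain is easy to sandbox; a minimal sketch (the parameter values are illustrative, not from the text):

```python
def dof(t, L, err_ratio):
    """Sum DoF: caching gain t plus the CSIT-scaled spatial gain.

    err_ratio is the normalized estimation error sigma_e^2 / sigma^2;
    (x)_+ denotes max(x, 0).
    """
    return t + L * max(1.0 - err_ratio, 0.0)

# Perfect CSIT: the full spatial gain rides on top of the caching gain.
print(dof(t=4, L=8, err_ratio=0.0))  # 12.0
# Estimation error as large as the channel power: the spatial gain
# vanishes, while the caching gain t survives untouched.
print(dof(t=4, L=8, err_ratio=1.0))  # 4.0
```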

3. Pilot overhead caps the spatial DoF at $T_c/4$. With $\tau = L$ pilots per length-$T_c$ block, the effective spatial DoF is $L\,(1 - L/T_c)$, maximized at $L^* = T_c/2$, which yields $T_c/4$ spatial DoF. The caching gain $t$ is pilot-free and adds on top.
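The pilot wall can be verified numerically; a short sweep, assuming the $\tau = L$ overhead model above:

```python
def effective_spatial_dof(L, Tc):
    """Spatial DoF left over after spending tau = L symbols of each
    length-Tc coherence block on pilots: L * (1 - L/Tc)."""
    return L * (1 - L / Tc)

Tc = 100
best_L = max(range(1, Tc), key=lambda L: effective_spatial_dof(L, Tc))
# The sweep confirms L* = Tc/2 with Tc/4 effective spatial DoF.
print(best_L, effective_spatial_dof(best_L, Tc))  # 50 25.0
```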

4. Blind interference management. In the no-CSIT regime, pure MIMO DoF collapses to 1, but cache-aided delivery achieves $\mathrm{DoF} = t + 1$. Cached side information substitutes for CSIT; XOR cancellation works without channel knowledge.
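The channel-oblivious XOR step can be illustrated with a toy two-user exchange (the subfile contents and names here are invented for the demo):

```python
# User 1 wants subfile A2 but has B1 cached; user 2 wants B1 and has A2
# cached. One XOR multicast serves both, with zero channel knowledge.
A2 = b"subfile-A-for-user-1"  # requested by user 1, cached at user 2
B1 = b"subfile-B-for-user-2"  # requested by user 2, cached at user 1
codeword = bytes(a ^ b for a, b in zip(A2, B1))  # the single transmission

# Each receiver XORs out what it already holds in cache:
got1 = bytes(x ^ c for x, c in zip(codeword, B1))  # user 1 recovers A2
got2 = bytes(x ^ c for x, c in zip(codeword, A2))  # user 2 recovers B1
assert got1 == A2 and got2 == B1
```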

5. Outage analysis: group size matters. The multicast group size $t + L$ determines the worst-user outage penalty. Larger groups yield higher DoF but a worse outage rate. Optimal group sizing requires a trade-off analysis at the target outage level.
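A crude feel for the group-size penalty, assuming i.i.d. per-user outage events (a simplification for illustration; the chapter's analysis works with the actual fading statistics):

```python
def group_outage(p_user, group_size):
    """A multicast round fails if ANY of the t + L served users is in
    outage; with i.i.d. per-user outage p_user this is 1 - (1 - p)^n."""
    return 1 - (1 - p_user) ** group_size

# Growing t + L: more DoF per transmission, but worse worst-user outage.
for n in (2, 8, 32):
    print(n, round(group_outage(0.01, n), 4))
```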

6. High-mobility scenarios benefit most from caching. At $T_c = 100$, $L = 32$, $\mu = 0.2$, and 20+ users, the caching gain contributes roughly half the deliverable DoF. Without caching, the spatial pipeline is capped by the pilot wall.
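A back-of-envelope reconstruction of this example. Combining the pilot-overhead and CSIT-quality discounts multiplicatively is an assumption made here for illustration, as is the hypothetical CSIT quality value of 0.2:

```python
def dof_split(K, mu, L, Tc, csit_quality):
    """Caching gain t = K * mu, plus a spatial gain discounted by both
    pilot overhead (1 - L/Tc) and CSIT quality (assumed multiplicative;
    this combination is a sketch, not a formula from the text)."""
    t = K * mu
    spatial = L * (1 - L / Tc) * max(csit_quality, 0.0)
    return t, spatial

# The chapter's high-mobility numbers; csit_quality = 0.2 is hypothetical.
t, spatial = dof_split(K=20, mu=0.2, L=32, Tc=100, csit_quality=0.2)
print(t, spatial, t / (t + spatial))
```

Under these assumptions the caching term contributes close to half of the total, consistent with the point above.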

7. Design implication. Cache-aided designs should be preferred over CSIT-hungry massive MIMO in CSIT-poor environments. For vehicular, aerial, and mmWave deployments, coded caching is a primary DoF resource, not an afterthought.

Looking Ahead

Chapter 8 moves to a different architecture: the cloud-RAN, with edge nodes (ENs) holding caches and a central cloud with limited fronthaul capacity $C$. The CommIT NDT (normalized delivery time) framework characterizes the cloud-edge tradeoff, and cache-fronthaul substitutability is captured by the NDT surface $\Delta(M, C)$. Chapter 9 then treats the multi-server case: multiple cooperating servers, shared and dedicated cache models, and cooperative coded multicasting.