Coded Caching over Wireless Channels
From Error-Free Links to Wireless Channels
The MAN framework assumes a shared error-free link of unlimited bandwidth. Real wireless systems have a noisy broadcast channel with limited capacity, and the server (base station) may have multiple antennas. The natural question is: how does the coded multicasting gain interact with the spatial multiplexing gain of MIMO?
The answer, discovered by Shariatpanahi, Caire, and Khalaj, is surprisingly clean: the two gains add. With $L$ transmit antennas and coded caching parameter $t = KM/N$, the server can simultaneously serve $t + L$ users per transmission, achieving a normalized delivery time that decreases as $\frac{K(1 - M/N)}{t + L}$. The spatial multiplexing gain $L$ and the coded caching gain $t$ are additive in the DoF sense, giving a total DoF of $t + L$ (or more precisely, $\min(t + L, K)$).
Definition: Normalized Delivery Time (NDT)
Normalized Delivery Time (NDT)
For a wireless coded caching system with a noisy broadcast channel, the normalized delivery time is:
$$\mathrm{NDT} = \frac{T}{F/C}$$
where $T$ is the time required to satisfy all demands, $F$ is the file size, and $C$ is the capacity of the point-to-point link (single user, no caching). The NDT measures the delivery time relative to the time $F/C$ needed to send one file to one user without caching.
For the MAN scheme over an error-free shared link: $\mathrm{NDT} = \frac{K(1 - M/N)}{1 + KM/N}$.
For a MISO broadcast channel with $L$ transmit antennas: $\mathrm{NDT} = \frac{K(1 - M/N)}{t + L}$ where $t = KM/N$.
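As a quick numerical sanity check, the two NDT expressions above can be evaluated directly (a minimal sketch; the function names are mine, and $t = KM/N$ is assumed to be integer-valued):

```python
def ndt_shared_link(K, M_over_N):
    """MAN NDT over an error-free shared link: K(1 - M/N) / (1 + K*M/N)."""
    t = K * M_over_N
    return K * (1 - M_over_N) / (1 + t)

def ndt_miso(K, M_over_N, L):
    """MISO BC NDT with L transmit antennas: K(1 - M/N) / min(t + L, K)."""
    t = K * M_over_N
    return K * (1 - M_over_N) / min(t + L, K)

print(ndt_shared_link(6, 1/3))   # 4/3: multicast gain t + 1 = 3
print(ndt_miso(6, 1/3, 2))       # ~1.0: DoF t + L = 4
```

The `min(t + L, K)` in the denominator caps the DoF at $K$, since the server cannot serve more users than exist.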
Normalized delivery time
The ratio of the total delivery time to the time needed to send one file to one user over the point-to-point channel. The NDT captures the combined gain from caching and multi-antenna transmission.
Related: Coded caching, Coded multicasting gain
Theorem: Multi-Antenna Coded Caching: Additive Gain
For a MISO broadcast channel with $L$ transmit antennas, $K$ single-antenna users, and coded caching parameter $t = KM/N$, the achievable NDT is:
$$\mathrm{NDT} = \frac{K(1 - M/N)}{\min(t + L, K)}$$
for integer $t$. The total DoF is $\min(t + L, K)$, which (for $t + L \le K$) is the sum of the spatial multiplexing gain $L$ and the coded caching gain $t$.
In particular, with enough antennas that $t + L \ge K$, we achieve NDT $= 1 - M/N$, meaning all $K$ users are served simultaneously, each in the time it takes to send just its uncached fraction to a single user. This is the ultimate limit: even a dedicated interference-free link per user would need NDT $= 1 - M/N$.
The key insight is that multi-antenna beamforming and coded multicasting are complementary. Beamforming creates parallel spatial channels, and coded caching creates multicast groups of size $t + 1$. The server can serve several overlapping multicast groups simultaneously, each of size $t + 1$, reaching $t + L$ distinct users per transmission slot. However, the total is capped at $K$ (the server cannot serve more users than exist), giving $\min(t + L, K)$ users per slot.
The additive DoF arises because each beamforming direction can carry one multicast message, and the coded caching gain acts within each beamforming direction.
Multicast-aware zero-forcing
Consider active user sets $\mathcal{S}$ of size $t + L$. For each such set, the server uses zero-forcing beamforming to transmit the $\binom{t+L}{t+1}$ multicast messages addressed to the $(t+1)$-subsets $\mathcal{T} \subset \mathcal{S}$ simultaneously, each along its own beamforming vector.
Each multicast message is the XOR $\bigoplus_{k \in \mathcal{T}} W_{d_k, \mathcal{T} \setminus \{k\}}$ as in the MAN scheme. The beamforming vector for the message addressed to group $\mathcal{T}$ lies in the null space of the channels of the $L - 1$ active users in $\mathcal{S} \setminus \mathcal{T}$, i.e., the users NOT in $\mathcal{T}$ who would otherwise experience interference.
Interference management
The crucial observation is that each coded message $X_{\mathcal{T}}$ is intended for all users in $\mathcal{T}$. Since each user in $\mathcal{T}$ can cancel $t$ of the $t + 1$ XOR terms using its cache, the interference from the other beamforming directions can be managed jointly by the caches and the spatial degrees of freedom.
Specifically, the message for group $\mathcal{T}$ must be placed in a subspace orthogonal to the channels of the $L - 1$ unintended active users in $\mathcal{S} \setminus \mathcal{T}$; an $L$-antenna beamformer can be made orthogonal to at most $L - 1$ channel vectors, so nulling exactly $L - 1$ users is feasible (standard ZF condition).
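The nulling condition can be checked numerically. The sketch below is my own illustration: it uses NumPy's SVD to extract a null-space direction and verifies that, with $L = 2$ antennas, a multicast message can be made invisible to the single unintended active user:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 2                      # transmit antennas
t = 2                      # coded caching parameter t = K*M/N
S = t + L                  # size of the active user set per slot
H = rng.normal(size=(S, L)) + 1j * rng.normal(size=(S, L))   # rows: user channels

def zf_beamformer(H, nulled):
    """Return a unit-norm beamformer orthogonal to the channels in `nulled`."""
    A = H[nulled, :]                 # (L-1) x L matrix of channels to null
    _, _, Vh = np.linalg.svd(A)
    v = Vh[-1].conj()                # null-space direction (zero singular value)
    return v / np.linalg.norm(v)

# Message for the group T = {0, 1, 2}: null it at user 3, the only active
# user outside T (|S \ T| = L - 1 = 1, so L = 2 antennas suffice).
v = zf_beamformer(H, nulled=[3])
print(abs(H[3] @ v))   # ~0: the unintended user receives no interference
print(abs(H[0] @ v))   # generically nonzero: intended users still hear it
```

With larger $t$ the same beamformer construction applies unchanged, since the number of users to null per message stays $L - 1$ regardless of the group size $t + 1$.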
Counting the DoF
Per transmission slot: the active set contains $t + L$ users and each recovers one new subfile, so $t + L$ user-demands are served. Each user must receive $\binom{K-1}{t}$ of its $\binom{K}{t}$ subfiles, giving $K\binom{K-1}{t}$ subfile-demands in total and $K\binom{K-1}{t}/(t+L)$ slots of one subfile-duration each. NDT $= \frac{K\binom{K-1}{t}}{(t+L)\binom{K}{t}} = \frac{K(1 - M/N)}{t + L}$. For $t + L \ge K$: NDT $= 1 - M/N$.
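The counting argument can be verified by brute force. The helper below (function names are mine) counts missing subfiles and slots explicitly and compares against the closed form, assuming integer $t$ and $t + L \le K$:

```python
from math import comb

def ndt_by_counting(K, t, L):
    """Explicit slot count, assuming integer t and t + L <= K.

    Each file is split into C(K, t) subfiles; a user caches the subfiles
    whose label includes it, so it still needs C(K-1, t) of them.
    """
    demands = K * comb(K - 1, t)       # total missing subfile-demands
    slots = demands / (t + L)          # t + L user-demands served per slot
    return slots / comb(K, t)          # each slot lasts one subfile-duration

def ndt_closed_form(K, t, L):
    return K * (1 - t / K) / (t + L)

print(ndt_by_counting(6, 2, 2))        # 1.0, matching the closed form
```

The agreement follows from $\binom{K-1}{t}/\binom{K}{t} = (K-t)/K = 1 - M/N$.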
Multi-Antenna Coded Caching: NDT vs Cache Size
Visualize how the normalized delivery time decreases with cache size for different numbers of transmit antennas. The spatial multiplexing gain and the coded multicasting gain combine additively in the DoF, dramatically reducing delivery time.
Multi-Antenna Coded Caching: Additive DoF
Example: MISO Coded Caching with $L = 2$, $K = 6$, $M/N = 1/3$
A base station with $L = 2$ antennas serves $K = 6$ users, each caching $M/N = 1/3$ of the library. Compute the NDT and compare with: (a) no caching ($M = 0$), (b) caching without coding, (c) coded caching without MIMO ($L = 1$).
MISO coded caching
$t = KM/N = 6 \cdot \tfrac{1}{3} = 2$. Total DoF: $t + L = 2 + 2 = 4$. Since $t + L \le K$: NDT $= \frac{K(1 - M/N)}{t + L} = \frac{6 \cdot 2/3}{4} = 1.0$.
The server delivers all demands in the time of one file transfer.
No caching ($M = 0$, $t = 0$)
NDT $= K/L = 6/2 = 3.0$ (serve $L = 2$ users at a time via MIMO).
Uncoded caching
NDT $= \frac{K(1 - M/N)}{L} = \frac{6 \cdot 2/3}{2} = 2.0$ (MIMO without multicast gain).
Coded caching without MIMO ($L = 1$)
NDT $= \frac{K(1 - M/N)}{t + 1} = \frac{6 \cdot 2/3}{3} = \frac{4}{3} \approx 1.33$.
Summary
| Scheme | NDT |
|---|---|
| No caching, MIMO | 3.0 |
| Uncoded caching, MIMO | 2.0 |
| Coded caching, no MIMO | 1.33 |
| Coded caching + MIMO | 1.0 |
The combined scheme achieves a $3\times$ improvement over no caching.
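The NDT column of the table follows directly from the formulas above; the short script below (my own, reusing the scheme labels from the table) reproduces all four values:

```python
K, M_over_N, L = 6, 1/3, 2
t = K * M_over_N                       # coded caching parameter, = 2

schemes = {
    "No caching, MIMO":       K / L,                        # DoF = L, full library sent
    "Uncoded caching, MIMO":  K * (1 - M_over_N) / L,       # local caching gain only
    "Coded caching, no MIMO": K * (1 - M_over_N) / (t + 1), # multicast gain only
    "Coded caching + MIMO":   K * (1 - M_over_N) / (t + L), # additive DoF
}
for name, value in schemes.items():
    print(f"{name:24s} NDT = {value:.2f}")
```

Running it prints the table values 3.00, 2.00, 1.33, and 1.00.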
Definition: Coded Caching in Fog-RAN
Coded Caching in Fog-RAN
In a Fog-RAN (Fog Radio Access Network) architecture, edge nodes (small base stations or APs) have local caches but are connected to a cloud processor via finite-capacity fronthaul links. The system combines:
- Edge caching: Each of the edge nodes caches a fraction $\mu_E$ of the library
- Fronthaul delivery: The cloud sends coded messages over fronthaul links of capacity $C_F$ each
- Wireless delivery: Edge nodes transmit to users over a wireless BC
The NDT framework extends to Fog-RAN: the end-to-end NDT combines a fronthaul term, which scales with the fraction $1 - \mu_E$ of content missing at the edge and inversely with the per-link fronthaul capacity $C_F$, and a wireless term, which scales inversely with the edge DoF $t_U + L$, where $\mu_U = M_U/N$ is the user-side cache fraction, $\mu_E = M_E/N$ is the edge-side cache fraction, and $t_U = K\mu_U$ is the user-side coded caching gain.
The edge cache reduces the effective library size (files already at the edge need not be fetched from the cloud), while the user cache provides the coded multicasting gain. The two caching layers are complementary.
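As a toy illustration only (a simplified model of my own, not a tight result from the Fog-RAN literature), the two-layer structure can be sketched as the maximum of an edge term and a fronthaul term under pipelined delivery:

```python
def fog_ran_ndt(K, mu_U, mu_E, L, C_F):
    """Toy pipelined Fog-RAN NDT: max of an edge term and a fronthaul term.

    Illustrative simplification (assumptions, not a literature result):
    - edge term: user caches give coded multicasting gain t_U = K*mu_U on
      the wireless hop, on top of the spatial multiplexing gain L;
    - fronthaul term: only the fraction (1 - mu_E) of the needed bits is
      missing at the edge and must cross fronthaul links of capacity C_F.
    """
    t_U = K * mu_U
    edge = K * (1 - mu_U) / min(t_U + L, K)
    fronthaul = K * (1 - mu_U) * (1 - mu_E) / C_F
    return max(edge, fronthaul)

print(fog_ran_ndt(K=6, mu_U=1/3, mu_E=0.5, L=2, C_F=4.0))
```

In this toy model, enlarging the edge cache ($\mu_E \to 1$) drives the fronthaul term to zero, after which only the wireless term, and hence the user-side coded caching gain, limits delivery, which matches the complementarity described above.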
Why This Matters: Content Delivery Networks and Edge Caching
Coded caching provides the information-theoretic foundation for next-generation CDN architectures. Current CDNs (Akamai, Cloudflare, Netflix Open Connect) use uncoded caching at edge servers, achieving local gains only. The coded caching framework shows that coordinated placement across users can provide an additional global gain. In 5G and beyond, Multi-access Edge Computing (MEC) platforms at base stations could implement coded caching for video delivery, with the user-side cache residing on the mobile device.
See Book cc for the complete treatment of coded caching systems.
Common Mistake: CSIT Requirements for Multi-Antenna Coded Caching
Mistake:
Assuming that the multiplicative MIMO-caching gain can be achieved without channel state information at the transmitter (CSIT).
Correction:
The zero-forcing beamforming used in the achievability proof requires full CSIT. Without CSIT, the spatial multiplexing gain is lost, and the system falls back to the shared-link model with DoF $t + 1$. With partial CSIT (e.g., delayed CSIT or statistical CSIT), intermediate gains are achievable. The interplay between CSIT quality and coded caching gain is an active research area.
Historical Note: The MIMO-Caching Connection
The connection between multi-antenna transmission and coded caching was established around 2016-2019 by several groups independently. Shariatpanahi, Caire, and Khalaj showed the additive DoF result for the MISO broadcast channel. Lampiris and Elia proved the optimality of this result under one-shot linear delivery. The Fog-RAN extension was developed by Tandon and others, connecting coded caching to the C-RAN framework of Chapter 25. Caire's group played a central role in establishing the wireless coded caching framework, particularly the NDT formulation and the D2D extension (Section 27.4). The elegance of the additive DoF result, spatial multiplexing plus multicast gain simply added, was one of the pleasant surprises of the field.
Quick Check
A base station with $L = 4$ antennas serves $K = 12$ users with coded caching parameter $t = 6$. What is the total DoF (users served per time slot)?
The total DoF is $\min(t + L, K) = \min(10, 12) = 10$. Each time slot serves 10 users: $L = 4$ beamforming directions, each carrying a multicast message for $t + 1 = 7$ users, but accounting for the overlap structure of the groups gives DoF $t + L = 10$.
Edge Caching in 5G NR and Beyond
3GPP has not standardized coded caching as of Release 18, but the building blocks are in place: (1) Multi-access Edge Computing (MEC) provides caching at base stations, (2) Device-to-device (D2D) ProSe enables the D2D caching extension, and (3) multicast/broadcast services (MBS) in Release 17 enable multicast delivery. The main standardization barrier is the signaling overhead for coordinated placement and the subpacketization challenge for large $K$. Practical deployments are more likely in controlled environments (stadium WiFi, enterprise networks) where the number of users and the content catalog are bounded.
- MEC cache sizes: 1-100 TB per edge server
- Mobile device cache: 1-10 GB for video prefetching
- Typical video catalog for coded caching: 100-1000 popular titles
- Signaling overhead for cache coordination: still an open practical challenge