The Multi-Server Coded Caching Model
From One Transmitter to Many
Chapters 5–6 treated a single $L$-antenna transmitter. Chapter 8's C-RAN model had cooperating ENs but was primarily about the fronthaul bottleneck between cloud and edge. This chapter zooms in on the multi-server problem: several independent transmitters (base stations), each with the full library, all cooperating to serve the users.
The key question: how does the coded caching gain scale when there are $K_T$ cooperating servers? The answer, roughly speaking: the caching gain still adds linearly, and the spatial gain multiplies by the number of cooperating servers. This is effectively the Lampiris-Caire result with effective antenna count $L_{\mathrm{tot}} = K_T L$, under full-cooperation assumptions.
This is the theoretical foundation of cell-free massive MIMO + coded caching: the architecture where coded caching meets the most sophisticated wireless cooperation framework.
Definition: Multi-Server Coded Caching Network
Multi-Server Coded Caching Network
The multi-server coded caching network consists of:
- $K_T$ servers (base stations), each with full access to the library of $N$ files and $L$ transmit antennas.
- $K$ single-antenna users, each with a cache of size $M$ files; memory ratio $\gamma = M/N$.
- A cooperative wireless channel: user $k$ receives a superposition of the signals from all $K_T$ servers.
The received signal at user $k$, channel use $t$:
$$y_k[t] = \sum_{s=1}^{K_T} \mathbf{h}_{s,k}^H \mathbf{x}_s[t] + z_k[t],$$
where $\mathbf{x}_s[t] \in \mathbb{C}^L$ is server $s$'s transmit signal and $\mathbf{h}_{s,k} \in \mathbb{C}^L$ is the channel from server $s$ to user $k$. Per-server power constraint: $\mathbb{E}\big[\|\mathbf{x}_s[t]\|^2\big] \le P$. Full cooperation means the servers jointly design $(\mathbf{x}_1, \dots, \mathbf{x}_{K_T})$ to achieve the desired beamforming pattern.
The cooperating-servers assumption gives the strongest scheme. In practice, server cooperation requires low-latency inter-server communication (via fiber or dedicated backhaul). Limited-cooperation variants relax this and generally achieve less than the full cooperative DoF.
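The superposition channel in the definition translates directly into a few lines of simulation; a minimal numpy sketch (the array shapes, the i.i.d. Rayleigh-fading assumption, and the parameter values are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

K_T, L, K = 3, 4, 30  # servers, antennas per server, users (illustrative)

# channel h_{s,k} from each server's L antennas to each user (Rayleigh fading)
H = (rng.standard_normal((K_T, K, L))
     + 1j * rng.standard_normal((K_T, K, L))) / np.sqrt(2)

# each server s transmits an L-dimensional signal x_s (jointly designed
# under full cooperation; here just random placeholders)
x = rng.standard_normal((K_T, L)) + 1j * rng.standard_normal((K_T, L))

# user k receives the superposition over all K_T servers (noise omitted):
# y_k = sum_s h_{s,k}^H x_s
y = np.einsum('skl,sl->k', H.conj(), x)

assert y.shape == (K,)
```

The single `einsum` is the whole model: the sum over $s$ is what makes the $K_T$ servers act as one virtual $K_T L$-antenna transmitter when their signals are designed jointly.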
Theorem: Multi-Server DoF under Full Cooperation
For the multi-server coded caching network with $K_T$ cooperating $L$-antenna servers and integer $K\gamma$:
$$\mathrm{DoF} = \min\big(K\gamma + K_T L,\; K\big).$$
This is the Lampiris-Caire result with effective antenna count $L_{\mathrm{tot}} = K_T L$.
Under full cooperation, the $K_T$ servers act as a single transmitter with $K_T L$ antennas (joint precoding). Chapter 5's DoF formula applies directly with $L$ replaced by $K_T L$. The caching gain $K\gamma$ is unchanged: caches are per-user, independent of the transmitter count.
Achievability
Consolidate the $K_T$ servers into a single virtual transmitter with $K_T L$ antennas. Apply the Lampiris-Caire scheme (Ch. 5) directly: $\mathrm{DoF} = \min(K\gamma + K_T L, K)$. The precoding is cooperative: each server's beam is a coordinate projection of the virtual beam.
Converse
Cut-set argument: the servers collectively output $K_T L$ DoF's worth of signal per channel use. Caches contribute $K\gamma$ DoF via XOR multicast. Total: $K\gamma + K_T L$ by the usual cut-set bound, with $K$ as the saturation ceiling.
Tight match
Upper and lower bounds coincide.
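The theorem's DoF expression reduces to a one-line function; a sketch (the function name and the example parameters are ours):

```python
def multi_server_dof(K: int, gamma: float, K_T: int, L: int) -> float:
    """min(K*gamma + K_T*L, K): caching gain K*gamma plus aggregate
    spatial gain K_T*L, saturating at K (at most one DoF per user)."""
    return min(K * gamma + K_T * L, K)

# cooperation multiplies only the spatial term
print(multi_server_dof(30, 1/6, 3, 4))   # 17.0
print(multi_server_dof(30, 1/6, 1, 4))   # 9.0
# enough cooperating servers saturate the ceiling of K = 30
print(multi_server_dof(30, 1/6, 10, 4))  # 30
```

Note the saturation: past $K_T L = K - K\gamma$, extra cooperating servers buy nothing, since each user can receive at most one DoF.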
Multi-Server DoF: Effect of Cooperation
DoF as a function of memory ratio $\gamma$, for varying number of cooperating servers $K_T$. Blue solid: $K_T$ servers cooperating ($\mathrm{DoF} = \min(K\gamma + K_T L, K)$). Red dashed: single server ($\mathrm{DoF} = \min(K\gamma + L, K)$). Below saturation, the DoF gain from cooperation is $(K_T - 1)L$: proportional to the cooperation scale.
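The figure's curves can be regenerated directly from the DoF formula; a sketch (the values $K = 30$, $L = 4$, $K_T = 3$ are our illustrative choices):

```python
import numpy as np

K, L = 30, 4
gammas = np.linspace(0, 1, 7)  # memory ratio gamma = M/N

def dof(K_T):
    """DoF curve min(K*gamma + K_T*L, K) over the gamma grid."""
    return np.minimum(K * gammas + K_T * L, K)

coop = dof(3)    # blue solid: 3 cooperating servers
single = dof(1)  # red dashed: single server

# below the saturation point, the vertical gap is exactly (K_T - 1)*L = 8
print(coop - single)
```

Passing `gammas` and `coop`/`single` to any plotting library reproduces the two curves; both flatten at the ceiling $K$ for large $\gamma$.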
Example: Multi-Server DoF Computation
For $K = 30$ users, memory ratio $\gamma = M/N = 1/6$, and $K_T = 3$ servers each with $L = 4$ antennas, compute the DoF under full cooperation.
Compute
$K\gamma = 30 \cdot \tfrac{1}{6} = 5$. $\mathrm{DoF} = \min(K\gamma + K_T L, K) = \min(5 + 12, 30) = 17$.
Decomposition
Caching gain $K\gamma = 5$. Aggregate spatial gain $K_T L = 3 \cdot 4 = 12$. Sum: 17. Per-user: $17/30 \approx 0.57$ DoF.
Comparison
Single server with $L = 4$: $\mathrm{DoF} = 5 + 4 = 9$. Three cooperating servers give $\mathrm{DoF} = 17$, almost doubling throughput. Critically, the caching contribution $K\gamma = 5$ is unchanged; what multiplies by 3 is the spatial gain.
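The example's arithmetic can be checked mechanically; a quick sketch using the parameters above:

```python
K, gamma, K_T, L = 30, 1/6, 3, 4

caching_gain = K * gamma                     # 5.0, per-user caches
spatial_gain = K_T * L                       # 12, aggregate antennas
multi = min(caching_gain + spatial_gain, K)  # full cooperation
single = min(caching_gain + L, K)            # one server only

print(multi, multi / K, multi / single)
```

The last print shows the three headline numbers: 17 total DoF, roughly 0.57 DoF per user, and a roughly 1.9x throughput factor over the single server.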
Multi-Server Cooperative Caching Topology
Key Takeaway
Full server cooperation multiplies the spatial gain by $K_T$; the caching gain stays unchanged: $\mathrm{DoF} = \min(K\gamma + K_T L, K)$. This is the Lampiris-Caire result with effective $L_{\mathrm{tot}} = K_T L$. The caching gain is a per-user property: multiplying transmitters does not multiply it. Practical deployments balance server cooperation (expensive: requires inter-server backhaul) against caching (cheap: local storage).
Cooperation Is Not Free
Full server cooperation has costs:
- Low-latency backhaul. To jointly precode, servers must exchange user data in real time, which demands microsecond-scale backhaul latency. Fiber between co-located servers is feasible; across a wide area it is not.
- Shared CSI. All servers need accurate channel knowledge for every user; CSI is centrally aggregated and distributed. Chapter 7's pilot overhead analysis applies to the aggregate $K_T L$-antenna virtual transmitter.
- Clock synchronization. Phase-coherent transmission requires microsecond-scale clock sync across servers.
Realistic deployments often use partial cooperation: e.g., cluster servers into groups of 2-4 that cooperate fully, then time-division between clusters. This reduces backhaul and sync requirements while capturing most of the cooperation gain.
The CommIT group has studied this partial-cooperation regime extensively in the cell-free massive MIMO context (Lampiris-Bhattacharjee-Caire 2022+).
Cell-Free Massive MIMO Is the Deployment Target
The multi-server coded caching model maps directly to cell-free massive MIMO (CF-mMIMO), an emerging 6G architecture:
- Many access points (APs). Instead of a few large base stations, CF-mMIMO uses dozens to hundreds of small APs spread through a coverage area.
- Central processing unit (CPU). APs are connected to a CPU via fronthaul; CPU orchestrates cooperation.
- Each AP has a modest antenna count (e.g., $L = 2$–$8$). Aggregate: $K_T L$ in the tens to hundreds.
- Each AP can cache content. Pre-placement populates AP caches; cooperative delivery exploits combined storage.
In this architecture, Chapter 9's multi-server model is the information-theoretic baseline. In practice, the caching gain is a function of the aggregate storage across APs, while the spatial gain is the number of cooperating APs times their local antenna count. Both scale into the hundreds in aggressive CF-mMIMO designs.
Deployment path: 6G fog-based massive MIMO (2025+), which the CommIT group has prototyped extensively.
- CF-mMIMO: 20–100 APs per coverage area, each with 2–8 antennas
- Inter-AP backhaul: microsecond latency required for full cooperation
- CSI aggregation at CPU: 10–100 ms periodicity
- Per-AP cache: 10 GB–1 TB typical for 6G prototypes
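Plugging CF-mMIMO-scale numbers from the ranges above into the theorem's DoF formula illustrates the scaling; a sketch (the specific choices of 64 APs, 200 users, and a 10% per-user cache are ours, not from the text):

```python
def dof(K, gamma, K_T, L):
    """Multi-server DoF under full cooperation: min(K*gamma + K_T*L, K)."""
    return min(K * gamma + K_T * L, K)

# illustrative values inside the ranges above:
# 64 APs with 4 antennas each, 200 users each caching 10% of the library
K, gamma, K_T, L = 200, 0.1, 64, 4

print(dof(K, gamma, K_T, L))  # aggregate 256 antennas: ceiling K is hit
print(dof(K, gamma, 1, L))    # same system with a single 4-antenna server
```

At this scale the aggregate spatial gain alone ($64 \times 4 = 256$) exceeds the number of users, so the network saturates at one DoF per user, versus 24 total DoF for the single-server baseline.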