The Multi-Server Coded Caching Model

From One Transmitter to Many

Chapters 5–6 treated a single $L$-antenna transmitter. Chapter 8's C-RAN model had $N_\text{EN}$ cooperating ENs but was primarily about the fronthaul bottleneck between cloud and edge. This chapter zooms in on the multi-server problem: several independent transmitters (base stations), each with the full library, all cooperating to serve users.

The key question: how does the coded caching gain $t$ scale when there are $S$ cooperating servers? The answer, roughly speaking, is $\mathrm{DoF} = t + SL$: the caching gain still adds linearly, and the spatial gain multiplies by the number of cooperating servers. This is effectively the Lampiris-Caire result with $L_\text{eff} = SL$, under cooperative assumptions.

This is the theoretical foundation of cell-free massive MIMO + coded caching: the architecture where coded caching meets the most sophisticated wireless cooperation framework.

Definition:

Multi-Server Coded Caching Network

The multi-server coded caching network consists of:

  • SS servers (base stations), each with full access to the library W={W1,…,WN}\mathcal{W} = \{W_1, \ldots, W_{N}\} and LL transmit antennas.
  • KK single-antenna users, each with a cache of size MM files.
  • A cooperative wireless channel: user kk receives a superposition of signals from all SS servers.

The received signal at user $k$ in channel use $m$ is $y_k[m] = \sum_{i=1}^S \mathbf{h}_{k,i}^H \mathbf{x}_i[m] + w_k[m]$, where $\mathbf{x}_i \in \mathbb{C}^L$ is server $i$'s transmit signal and $w_k[m]$ is additive noise. Per-server power constraint: $\mathbb{E}[\|\mathbf{x}_i\|^2] \leq P$. Full cooperation means the $S$ servers jointly design $\{\mathbf{x}_i\}$ to achieve the desired beamforming pattern.
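As a concrete sanity check of the signal model, here is a minimal NumPy sketch (the dimensions $S$, $L$, $K$ are invented for illustration) that forms the superposition $y_k[m] = \sum_{i=1}^S \mathbf{h}_{k,i}^H \mathbf{x}_i[m] + w_k[m]$ for a single channel use:

```python
import numpy as np

rng = np.random.default_rng(0)
S, L, K = 3, 4, 6  # hypothetical: servers, antennas per server, users

# i.i.d. complex Gaussian channels h_{k,i} in C^L and transmit vectors x_i in C^L
H = (rng.standard_normal((K, S, L)) + 1j * rng.standard_normal((K, S, L))) / np.sqrt(2)
X = rng.standard_normal((S, L)) + 1j * rng.standard_normal((S, L))
w = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

# y_k = sum_i h_{k,i}^H x_i + w_k, computed for every user k at once
y = np.einsum('ksl,sl->k', H.conj(), X) + w
```

Each user sees one scalar per channel use, regardless of how many servers transmit; that superposition is exactly what joint precoding exploits.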

The cooperating-servers assumption gives the strongest scheme. In practice, server cooperation requires low-latency inter-server communication (via fiber or dedicated backhaul). Limited-cooperation variants relax this and generally achieve less than the full cooperative DoF.

Theorem: Multi-Server DoF under Full Cooperation

For the multi-server coded caching network with $S$ cooperating $L$-antenna servers and integer $t = KM/N$: $\mathrm{DoF}(M) = \min(t + SL,\; K)$. The DoF is the Lampiris-Caire result with effective antenna count $L_\text{eff} = SL$.

Under full cooperation, the $S$ servers act as a single transmitter with $SL$ antennas (joint precoding). Chapter 5's DoF formula applies directly with $L \to SL$. The caching gain $t$ is unchanged: caches are per-user, independent of the transmitter count.
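The theorem reduces to a one-line formula; a small Python helper (illustrative, not from the text) makes the $L \to SL$ substitution explicit:

```python
def dof_full_coop(K: int, N: int, M: int, S: int, L: int) -> int:
    """DoF of the multi-server network under full cooperation:
    min(t + S*L, K), assuming the caching gain t = K*M/N is an integer."""
    assert (K * M) % N == 0, "formula is stated for integer t"
    t = K * M // N
    return min(t + S * L, K)

# Single server (S=1) recovers Chapter 5's t + L; cooperation scales L to S*L.
print(dof_full_coop(K=20, N=20, M=4, S=2, L=2))  # t=4 -> min(4+4, 20) = 8
```

Setting $S = 1$ reproduces the single-transmitter DoF, which is the consistency check the proof sketch relies on.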

Multi-Server DoF: Effect of Cooperation

DoF as a function of memory ratio, for varying number of cooperating servers $S$. Blue solid: $S$ servers cooperating ($\mathrm{DoF} = t + SL$). Red dashed: single server ($\mathrm{DoF} = t + L$). The DoF gain from cooperation is $SL - L = (S-1)L$, proportional to the cooperation scale.

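The figure's two curves come straight from the DoF formula; a short sketch (with hypothetical $K$, $N$, $S$, $L$, since the figure's exact settings are interactive) sweeps the memory ratio and prints both curves:

```python
K, N, S, L = 20, 20, 2, 2  # hypothetical parameters

for M in range(0, N + 1, 5):
    t = K * M // N                  # integer caching gain at these M values
    coop = min(t + S * L, K)        # S servers cooperating
    single = min(t + L, K)          # one server
    print(f"M/N={M/N:.2f}  coop DoF={coop}  single DoF={single}")
```

The gap between the curves is the constant $(S-1)L$ until both saturate at $K$.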

Example: Multi-Server DoF Computation

For $K = 30$, $N = 60$, $M = 10$, and $S = 3$ servers each with $L = 4$ antennas, compute the DoF under full cooperation.
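A worked answer as a tiny self-contained computation: $t = KM/N = 30 \cdot 10 / 60 = 5$ and $SL = 12$, so $\mathrm{DoF} = \min(5 + 12, 30) = 17$.

```python
K, N, M, S, L = 30, 60, 10, 3, 4

t = K * M // N             # caching gain: 30*10/60 = 5
dof = min(t + S * L, K)    # min(5 + 12, 30) = 17
print(t, dof)              # 5 17
```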

Multi-Server Cooperative Caching Topology

Three cooperating servers, each with $L$ antennas; users each with cache $\mathcal{Z}_k$. Full cooperation (dashed lines) provides effective $SL$-antenna joint precoding. $\mathrm{DoF} = \min(t + SL, K)$ generalizes the Lampiris-Caire formula to multi-transmitter networks.

Key Takeaway

Full server cooperation multiplies the spatial gain by $S$; the caching gain stays unchanged: $\mathrm{DoF} = t + SL$. This is the Lampiris-Caire result with effective antenna count $L_\text{eff} = SL$. Caching gain is a per-user property; multiplying transmitters does not multiply it. Practical deployments balance server cooperation (expensive, since it requires inter-server backhaul) against caching (cheap, local storage).

Cooperation Is Not Free

Full server cooperation has costs:

  1. Low-latency backhaul. To jointly precode, servers must exchange user data in real time, over microsecond-latency backhaul. Fiber between co-located servers is feasible; across a wide area it is not.
  2. Shared CSI. All servers need accurate channel knowledge for every user; CSI is centrally aggregated and distributed. Chapter 7's pilot overhead analysis applies to the aggregate $SL$-antenna virtual transmitter.
  3. Clock synchronization. Phase-coherent transmission requires microsecond-scale clock sync across servers.

Realistic deployments often use partial cooperation: e.g., cluster servers into groups of 2-4 that cooperate fully, then time-division between clusters. This reduces backhaul and sync requirements while capturing most of the cooperation gain.
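Under the time-division model sketched above, each active cluster of $c$ servers achieves $\min(t + cL, K)$ in its slot, so that is also the overall DoF of the clustered scheme. A comparison with invented parameters (hypothetical, for illustration only):

```python
def dof(t: int, antennas: int, K: int) -> int:
    """DoF with a given caching gain t and effective antenna count."""
    return min(t + antennas, K)

# hypothetical: caching gain, users, servers, antennas/server, cluster size
t, K, S, L, c = 10, 40, 8, 2, 4

full = dof(t, S * L, K)       # all 8 servers cooperate: min(10+16, 40) = 26
clustered = dof(t, c * L, K)  # clusters of 4, time-shared: min(10+8, 40) = 18
print(full, clustered)
```

The clustered scheme trades $(S - c)L$ DoF for a backhaul and synchronization footprint confined to each cluster.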

The CommIT group has studied this partial-cooperation regime extensively in the cell-free massive MIMO context (Lampiris-Bhattacharjee-Caire 2022+).

Multi-server coded caching topology

$S = 3$ cooperating servers, each with $L$ antennas. Users each have cache $\mathcal{Z}_k$. Servers exchange data via inter-server backhaul (dashed lines). The wireless downlink is a MIMO BC with $SL$ effective cooperating antennas.
⚠️ Engineering Note

Cell-Free Massive MIMO Is the Deployment Target

The multi-server coded caching model maps directly to cell-free massive MIMO (CF-mMIMO), an emerging 6G architecture:

  1. Many access points (APs). Instead of a few large base stations, CF-mMIMO uses dozens to hundreds of small APs spread through a coverage area.
  2. Central processing unit (CPU). APs are connected to a CPU via fronthaul; CPU orchestrates cooperation.
  3. Each AP has modest antennas ($L \in [2, 16]$). Aggregate: $S \cdot L \in [100, 1600]$.
  4. Each AP can cache content. Pre-placement populates AP caches; cooperative delivery exploits combined storage.

In this architecture, Chapter 9's multi-server model is the information-theoretic baseline. The caching gain $t$ is practically a function of aggregate storage across APs; the spatial gain $SL$ is the number of cooperating APs times their local antennas. Both scale into the hundreds in aggressive CF-mMIMO designs.

Deployment path: 6G fog-based massive MIMO (2025+), which the CommIT group has prototyped extensively.

Practical Constraints
  • β€’

    CF-mMIMO: 20-100 APs per coverage area, each with 2-8 antennas

  • β€’

    Inter-AP backhaul: microsecond latency required for full cooperation

  • β€’

    CSI aggregation at CPU: 10-100 ms periodicity

  • β€’

    Per-AP cache: 10 GB - 1 TB typical for 6G prototypes