Decentralized Coded Caching

Centralized Placement Is Unrealistic

The MAN scheme (Ch 2) requires a central server that coordinates which specific combinatorial subsets each user caches. In real systems — especially D2D and edge networks — coordination at this level is impractical. Users may arrive and depart dynamically; their caches may have been populated at different times by different mechanisms. A central coordinator cannot ensure the MAN pattern across an arbitrarily large population.

The decentralized coded caching scheme (Maddah-Ali–Niesen, 2015) resolves this: each user independently caches a random subset of the library. No coordination needed. Asymptotically, as $K, F \to \infty$, the decentralized scheme achieves the same rate as centralized MAN. This makes coded caching deployable at CDN scale.

Definition:

Decentralized Random Caching

In decentralized random caching, each user $k$ independently samples a cache content of $MF$ bits uniformly at random from the $NF$-bit library. The placement is:

$$\mathcal{Z}_{k} \;=\; \text{uniform random subset of library bits, size } MF.$$

No coordination across users. The delivery phase adapts to the realized placement pattern.

Equivalently (up to concentration as $F$ grows): each user caches each bit of the library independently with probability $M/N = \mu$. Over $K$ users, each bit is cached $\sim K\mu$ times in expectation.
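Both expectations are easy to check numerically. A minimal sketch (the library and cache sizes below are illustrative assumptions, not from the text): each user flips an independent coin per bit, and both the per-user cache size and the per-bit replication concentrate around their stated expectations.

```python
import random

random.seed(0)

N, F = 20, 1000   # files and bits per file (toy sizes)
K = 10            # users
M = 4             # cache size in files
mu = M / N        # per-bit caching probability, mu = 0.2

# Each user independently caches each library bit with probability mu.
caches = [
    {(n, b) for n in range(N) for b in range(F) if random.random() < mu}
    for _ in range(K)
]

avg_cache_bits = sum(len(Z) for Z in caches) / K
avg_copies = sum(len(Z) for Z in caches) / (N * F)  # copies per library bit

print(f"avg cache size: {avg_cache_bits:.0f} bits (target MF = {M * F})")
print(f"avg copies per bit: {avg_copies:.2f} (expected K*mu = {K * mu:.1f})")
```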

Theorem: Decentralized MAN Rate

The decentralized random-placement scheme achieves per-user rate (asymptotically in the file size $F$):

$$R_{\text{dec}}(M) \;=\; \frac{K(1 - \mu)}{K \mu} \cdot \left(1 - (1 - \mu)^{K}\right).$$

As $K \to \infty$ at fixed $\mu$, $R_{\text{dec}} \to (1 - \mu)/\mu$, matching the centralized MAN asymptote exactly.
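The rate expression is straightforward to evaluate. A quick sketch tabulating $R_{\text{dec}}$ against its asymptote (the parameter values are illustrative assumptions):

```python
def r_dec(K: int, mu: float) -> float:
    # R_dec(M) = (K(1-mu) / (K*mu)) * (1 - (1-mu)^K)
    return (K * (1 - mu) / (K * mu)) * (1 - (1 - mu) ** K)

mu = 0.2
for K in (5, 20, 100, 1000):
    print(f"K = {K:4d}: R_dec = {r_dec(K, mu):.4f}")
print(f"asymptote (1 - mu)/mu = {(1 - mu) / mu:.4f}")
```

The rate increases in $K$ but saturates at $(1-\mu)/\mu = 4$ for $\mu = 0.2$: adding users costs almost nothing once the coded multicast gain kicks in.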

Decentralized placement replaces MAN's deterministic subset structure with a random one. The XOR delivery adapts to the realized random pattern. In the asymptotic regime ($K, F \to \infty$), the random pattern approaches the MAN structure with high probability, so the rates match.
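To make "delivery adapts to the realized pattern" concrete, here is a toy sketch of the greedy XOR delivery: for each subset $S$ of users, the server forms one XOR codeword from the subfiles that each $k \in S$ needs and that are cached by exactly the users $S \setminus \{k\}$. The sizes, seed, and demands are assumptions, and we only count codeword lengths (the longest constituent subfile) rather than materializing the XORs.

```python
import random
from itertools import combinations

random.seed(1)

N, F, K, mu = 4, 64, 3, 0.5   # toy sizes (assumed)
demands = [0, 1, 2]           # user k requests file demands[k] (assumed)

# Decentralized placement: user k caches bit (n, b) with probability mu.
cached = [{(n, b) for n in range(N) for b in range(F)
           if random.random() < mu} for _ in range(K)]

def holders(n, b):
    """Set of users that cached bit (n, b)."""
    return frozenset(k for k in range(K) if (n, b) in cached[k])

# Greedy delivery: for each user subset S, one XOR codeword covering,
# for every k in S, the bits of the demanded file cached by exactly S\{k}.
sent = 0
for size in range(K, 0, -1):
    for S in combinations(range(K), size):
        parts = []
        for k in S:
            want = frozenset(S) - {k}
            parts.append(sum(1 for b in range(F)
                             if holders(demands[k], b) == want))
        sent += max(parts)  # codeword is as long as its longest subfile

# Baseline: unicast every bit each user is missing.
uncoded = sum(F - sum(1 for b in range(F) if (demands[k], b) in cached[k])
              for k in range(K))
print(f"coded delivery: {sent} bits; uncoded unicast: {uncoded} bits")
```

Every missed bit of user $k$ lies in exactly one subfile (indexed by its holder set), so the rounds cover all demands; the coded total is never larger than the unicast total because each XOR serves up to $|S|$ users at once.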

Decentralized vs Centralized MAN

Rates converge as $K \to \infty$: decentralized (dashed) approaches centralized MAN (solid). Both tend to the asymptote $(1-\mu)/\mu$. At small $K$, the decentralized scheme has a slight gap to MAN; the gap closes quickly.
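The convergence shown in the figure can be reproduced numerically. A sketch comparing the two rate formulas, where the centralized MAN expression $K(1-\mu)/(1+K\mu)$ is exact when $K\mu$ is an integer (the parameter values are assumptions):

```python
def r_man(K: int, mu: float) -> float:
    # Centralized MAN rate; exact when K*mu is an integer.
    return K * (1 - mu) / (1 + K * mu)

def r_dec(K: int, mu: float) -> float:
    return ((1 - mu) / mu) * (1 - (1 - mu) ** K)

mu = 0.2
for K in (5, 10, 50, 100, 500):
    print(f"K = {K:3d}: MAN = {r_man(K, mu):.3f}, "
          f"dec = {r_dec(K, mu):.3f}, "
          f"ratio = {r_dec(K, mu) / r_man(K, mu):.3f}")
```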


Key Takeaway

Decentralized random placement matches centralized MAN asymptotically. No coordination required; each user independently caches random bits. This is the practical coded caching scheme: simple to implement, robust to user dynamics, asymptotically order-optimal. It is the basis for most deployable coded-caching designs.

Ji-Tulino-Llorca-Caire (2017) — Zipf Extension

Maddah-Ali–Niesen's decentralized result assumes uniform demand. The Ji-Tulino-Llorca-Caire extension (2017) generalizes to Zipf-$\alpha$ demand. The key insight: under Zipf, an optimal decentralized placement caches popular files at higher density and unpopular files at lower density. The per-user cache allocation becomes

$$\Pr(\text{user caches file } n) \;=\; \min\!\left(1,\; \mu N \cdot \frac{P_n^{\beta}}{\sum_{m} P_m^{\beta}}\right)$$

for some tuning parameter $\beta \in [0, 1]$. For $\beta = 0$: uniform (the original scheme). For $\beta = 1$: strictly proportional to popularity (deterministic popularity caching).

Order-optimal choice: approximately $\beta = \alpha/2$. This popularity-weighted decentralized scheme is within a constant factor of the expected-rate optimum for Zipf demand.
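A sketch of the resulting per-file densities under an assumed Zipf profile ($N$, $\mu$, $\alpha$ are illustrative). The normalization by $\sum_m P_m^{\beta}$ keeps the average density at the memory budget $\mu$ whenever no file hits the cap:

```python
N, mu, alpha = 100, 0.1, 0.8   # library size, budget, Zipf exponent (assumed)
beta = alpha / 2               # order-optimal tuning, beta ~ alpha/2

# Zipf-alpha popularity: P_n proportional to n^(-alpha).
Z = sum(n ** -alpha for n in range(1, N + 1))
P = [n ** -alpha / Z for n in range(1, N + 1)]

# Popularity-weighted caching density, capped at 1.
S_beta = sum(p ** beta for p in P)
q = [min(1.0, mu * N * p ** beta / S_beta) for p in P]

print(f"density for most popular file:  {q[0]:.3f}")
print(f"density for least popular file: {q[-1]:.3f}")
print(f"average density: {sum(q) / N:.3f} (budget mu = {mu})")
```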

Decentralized Random Placement (MN 2015)

Complexity: Off-peak: $O(KNF)$ bit-level decisions. Asymptotic cache size $\approx \mu NF = MF$ bits per user with high probability.
Input: Library $\{W_1, \ldots, W_N\}$; users $1, \ldots, K$;
per-user cache size $M$ files; caching fraction $\mu = M/N$.
Output: User caches $\mathcal{Z}_1, \ldots, \mathcal{Z}_K$.
1. for each user $k = 1, \ldots, K$ do
2. \quad for each bit $b$ of every file $W_n$ do
3. \quad\quad Include bit $b$ in $\mathcal{Z}_k$ independently with probability $\mu$.
4. \quad end for
5. end for
6. return $(\mathcal{Z}_1, \ldots, \mathcal{Z}_K)$.

In practice, one samples random blocks (e.g., 1 MB each) rather than individual bits, to reduce indexing overhead. The analysis carries over with minor modifications.
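A block-granularity version of the procedure above, as a minimal sketch (the function name, sizes, and seed are assumptions):

```python
import random

def decentralized_placement(num_files, blocks_per_file, num_users, mu,
                            seed=None):
    """Each user independently caches each block of each file with
    probability mu = M/N (block-level variant of MN 2015 placement)."""
    rng = random.Random(seed)
    return [
        {(n, b) for n in range(num_files) for b in range(blocks_per_file)
         if rng.random() < mu}
        for _ in range(num_users)
    ]

caches = decentralized_placement(num_files=8, blocks_per_file=512,
                                 num_users=4, mu=0.25, seed=42)
for k, Z in enumerate(caches):
    print(f"user {k}: {len(Z)} blocks cached "
          f"(target mu*N*blocks = {0.25 * 8 * 512:.0f})")
```

Each realized cache concentrates near the $\mu \cdot N \cdot$ (blocks per file) target, so the memory constraint holds with high probability rather than exactly.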

⚠️Engineering Note

Decentralized Caching in Production CDNs

Most production CDNs already use something close to decentralized caching:

  1. LFU / LRU. Each cache independently admits/evicts based on local access patterns. This is effectively decentralized with a popularity-aware bias.
  2. Randomized LRU (e.g., Windowed TinyLFU). Random-sampling-based eviction; closer to the theoretical random-placement model.
  3. Coordinated caching (CDN internal). Within a single operator's CDN fabric, caches can be coordinated explicitly (Akamai's proprietary schemes). This is centralized.
  4. Federated / multi-operator. Across operators, no coordination; each operator's caches are decentralized with respect to others. The MN decentralized analysis applies directly.

The deployability of decentralized coded caching hinges on (i) shifting from LRU/LFU to random admission policies and (ii) adding the coded-XOR delivery layer. Both are supported by emerging 6G edge architectures.

Practical Constraints
  • Akamai / Cloudflare: mostly LRU-based, coordinated within ASN

  • BitTorrent / P2P: decentralized content distribution

  • Edge-caching 3GPP Rel-18+: supports coordinated caching APIs

  • Decentralized schemes: asymptotically optimal, simpler to deploy