Shared vs Dedicated Cache Models

Who Owns the Cache?

Up to now, each user has had its own cache: a dedicated-cache model. Many real architectures have a different structure: shared caches, where multiple users share access to a single cache. Examples:

  • A Wi-Fi access point's cache serves all devices in its coverage.
  • An apartment building's edge cache serves all residents.
  • A content-delivery gateway serves many subscribers.

Shared caches change the coded-caching analysis. Placement must anticipate that several users may draw from the same cache: caching popular content there is more valuable, and caching user-specific content makes less sense. This section formalizes the two models and compares their rate-memory tradeoffs.

Definition: Dedicated-Cache Model

In the dedicated-cache model, each user $k \in [K]$ has its own cache $\mathcal{Z}_k$ of size $M$ files. This is the standard MAN model of Chapter 2, used in all prior chapters. The coded caching gain is $t = KM/N$.

Definition: Shared-Cache Model

In the shared-cache model, there are $\Lambda$ caches, each of size $M_s$ files. Each user is associated with exactly one cache; the group of users sharing cache $\lambda$ all have access to $\mathcal{Z}_\lambda$. Let $K_s = K/\Lambda$ denote the average number of users per cache. The aggregate cache budget is $\Lambda M_s$ files; the effective per-user memory is $M_\text{eff} = M_s \Lambda / K = M_s / K_s$.

For a fair comparison with the dedicated model, set $M_s = K_s M$, so that the total system storage $\Lambda M_s = \Lambda K_s M = KM$ matches that of a dedicated-cache system with per-user cache $M$.

Shared caches are a natural fit for small-cell deployments, where several nearby users share a micro-cell's cache. The number of users per cache $K_s$ depends on user density and how the caches are distributed.

Theorem: Shared-Cache MAN Rate

For the shared-cache model with $\Lambda$ caches, each of size $M_s$, and $K_s$ users per cache (so $K = \Lambda K_s$), the achievable multicast rate under a MAN-style scheme is

$$R_\text{shared}(M_s, K_s, \Lambda) = K_s \cdot \frac{\Lambda(1 - M_s/N)}{1 + \Lambda M_s/N}.$$

For $K_s = 1$ (one user per cache), this reduces to the standard MAN rate with $K = \Lambda$ users. For $K_s > 1$, the rate scales linearly with $K_s$: each extra user per cache adds its own demand without enlarging the multicast gain.

Users sharing a cache have access to the same $\mathcal{Z}_\lambda$, so they gain no individualized XOR cancellation opportunities beyond what any one of them would have. The MAN scheme effectively runs on $\Lambda$ "super-users" (one per cache), with each transmission round repeated $K_s$ times, once per user in each group. The rate is therefore $K_s \cdot R_\text{MAN}(\Lambda, M_s)$, where $R_\text{MAN}$ is the dedicated-cache rate with $\Lambda$ users and cache size $M_s$.
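As a quick sanity check of the theorem, the $K_s = 1$ reduction can be verified directly. This is a minimal sketch; the function names are illustrative, not from the text:

```python
def man_rate(K, m):
    """Dedicated-cache MAN rate with K users; m = M/N is the normalized cache size."""
    return K * (1 - m) / (1 + K * m)

def shared_rate(Lam, K_s, m_s):
    """Shared-cache rate: K_s rounds of MAN over Lam super-users (m_s = M_s/N)."""
    return K_s * man_rate(Lam, m_s)

# With one user per cache, the shared model coincides with the dedicated model:
for m in (0.05, 0.2, 0.5):
    assert shared_rate(8, 1, m) == man_rate(8, m)
```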

Shared vs Dedicated Caches

Comparing the shared-cache and dedicated-cache rates at equal aggregate storage: the dedicated model has one cache per user, while the shared model pools users into groups of $K_s$. For large $K_s$ (fewer but larger caches), the rate changes non-trivially; which model is preferable depends on the normalized memory $\mu = M/N$.


Example: Shared vs Dedicated at Equal Total Storage

Compare the rate of the shared-cache model with $\Lambda = 5$ caches, $K_s = 4$ users per cache ($K = 20$ total), and per-cache size $M_s = 4M$, against the dedicated model with $K = 20$ users, each with a cache of size $M$. Use $M/N = 0.1$.
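A worked computation for this example, plugging the parameters into the rate formulas of this section (a sketch with illustrative names; `fractions.Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def man_rate(K, m):
    """Dedicated-cache MAN rate with K users; m = M/N."""
    return K * (1 - m) / (1 + K * m)

def shared_rate(Lam, K_s, m_s):
    """Shared-cache rate: K_s times the MAN rate over Lam super-users."""
    return K_s * man_rate(Lam, m_s)

m = Fraction(1, 10)        # M/N = 0.1
Lam, K_s = 5, 4            # 5 caches, 4 users per cache, K = 20
m_s = K_s * m              # M_s = 4M, so M_s/N = 0.4 (equal aggregate storage)

R_shared = shared_rate(Lam, K_s, m_s)   # 4 * 5*(0.6)/(1 + 2) = 4
R_dedicated = man_rate(Lam * K_s, m)    # 20*(0.9)/(1 + 2) = 6
print(R_shared, R_dedicated)            # 4 6
```

Under these formulas the shared system comes out ahead (rate 4 versus 6): every user sees the larger pooled cache $M_s = 4M$, so the uncached fraction per user drops from 0.9 to 0.6 while the denominator $1 + t = 3$ is unchanged.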

Hybrid Models

Real deployments often blend the two: each user has a small personal cache (phone) plus access to a shared network cache (gateway). In the hybrid model, the total aggregate cache is $K M_\text{personal} + \Lambda M_\text{shared}$.

Coded caching then operates on two "layers": personal caches give per-user XOR gains, shared caches give cache-group-level XOR gains, and the rate combines the two. Parrinello et al. (2020) analyze this setting; the resulting rate-memory tradeoff generalizes both the dedicated and shared formulas.

The practical design challenge: balance personal vs shared storage under a total-budget constraint. Shared is typically cheaper (larger caches with lower per-byte cost) but less flexible (no personalized content).

Common Mistake: The 'Caching Gain' is per Cache, not per User

Mistake:

Applying the dedicated-cache formula $t = KM/N$ to the shared model.

Correction:

In the shared model, the relevant parameter is $t_\text{shared} = \Lambda M_s/N$: the cache count (not the user count) times the per-cache size. Each cache contributes to the coded multicast opportunity; users sharing a cache do not contribute independently. At equal aggregate storage, $\Lambda M_s = KM$, so $t_\text{shared} = KM/N = t$: the caching gain is the same. The difference is the user-facing rate, which scales linearly in $K_s$ (users per cache).
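The equality of the two caching gains at equal aggregate storage, and the $K_s$ factor in the user-facing rate, are easy to check numerically (a sketch with illustrative parameter names, reusing this section's example numbers):

```python
from fractions import Fraction

K, Lam = 20, 5                    # 20 users, 5 shared caches
K_s = K // Lam                    # 4 users per cache
m = Fraction(1, 10)               # M/N for the dedicated model
m_s = K_s * m                     # M_s/N under equal aggregate storage

t_dedicated = K * m               # t = KM/N
t_shared = Lam * m_s              # t_shared = Lambda * M_s / N
assert t_dedicated == t_shared == 2   # same caching gain

# Same gain 1 + t, but the shared-model rate carries a factor K_s:
R_shared = K_s * (Lam - t_shared) / (1 + t_shared)
R_dedicated = (K - t_dedicated) / (1 + t_dedicated)
print(R_shared, R_dedicated)      # 4 6
```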