Design Principles for Practical D2D Caching

From Theory to Practice

The Ji-Caire-Molisch scaling law gives us a theoretical target: $\Theta(M/N)$ per-user throughput. But several engineering decisions determine whether we approach this target in a real system. This section distills the design principles most likely to matter in a 6G D2D caching deployment.

Definition: Placement Strategies in D2D Networks

Four commonly-studied placement strategies for D2D caching:

  1. Random uniform placement. Each user independently caches $M$ files uniformly at random from the library. Robust and simple; asymptotically optimal for uniform demand.
  2. Popularity-proportional. Each user caches files with probability proportional to their popularity. Good for Zipf demand but induces high duplication.
  3. Deterministic combinatorial. Users cache according to a pre-designed pattern (MAN-style). Enables coded multicasting (Ch 11) but requires synchronization.
  4. Geo-aware placement. Users in the same geographic cluster cache coordinated content to maximize neighbor coverage. Effective in urban deployments.

The "right" choice depends on demand model, network topology, and operational constraints.

Coverage vs Diversity Tradeoff

A fundamental tradeoff in D2D caching placement:

  • Coverage. For each file, more users should cache it, so that any requester is likely to find a neighbor with that file.
  • Diversity. Across the local neighborhood, different users should cache different files, so that requests for many files can all be served.

These objectives conflict: if everyone caches the same 100 popular files (high coverage per file), the neighborhood caches only 100 distinct files (low diversity). If everyone caches disjoint sets (max diversity), each file is rarely cached (low coverage).

Optimal placement balances these. The Ji-Caire-Molisch 2016 result shows that a random, slightly correlated placement, in which neighbors' caches have some overlap but don't fully align, achieves the asymptotic optimum. Popularity-aware extensions (Ji-Tulino-Llorca-Caire 2017) refine the constants for Zipf demand.
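The tradeoff is easy to see numerically. The sketch below (illustrative parameters; the Zipf exponent and neighborhood size are assumptions, not values from the analysis) compares a 20-user neighborhood under uniform versus popularity-proportional placement:

```python
import random

def uniform_caches(n, N, M, rng):
    # Strategy 1: each user caches M files uniformly at random.
    return [set(rng.sample(range(N), M)) for _ in range(n)]

def popularity_caches(n, N, M, alpha, rng):
    # Strategy 2: each user draws files with Zipf(alpha) probabilities
    # until it holds M distinct ones; popular files get duplicated.
    weights = [1.0 / (f + 1) ** alpha for f in range(N)]
    caches = []
    for _ in range(n):
        Z = set()
        while len(Z) < M:
            Z.add(rng.choices(range(N), weights=weights)[0])
        caches.append(Z)
    return caches

rng = random.Random(0)
n, N, M = 20, 1000, 10                    # one neighborhood of 20 users
uni = uniform_caches(n, N, M, rng)
pop = popularity_caches(n, N, M, 1.5, rng)

def diversity(caches):
    return len(set().union(*caches))      # distinct files in the neighborhood

def coverage(caches, f):
    return sum(f in Z for Z in caches)    # users caching file f

print("diversity:", diversity(uni), "vs", diversity(pop))
print("coverage of top file:", coverage(uni, 0), "vs", coverage(pop, 0))
```

Uniform placement wins on diversity (many distinct files nearby) while popularity-proportional placement wins on coverage of the head of the catalog, which is exactly the conflict described above.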

Random Uniform D2D Placement

Complexity: Off-peak: $nM$ file transfers. Each user's cache is independent; no coordination required.
Input: Library $\{W_1, \ldots, W_N\}$; users $1, \ldots, n$; per-user cache size $M$.
Output: Cache contents $\mathcal{Z}_1, \ldots, \mathcal{Z}_n$.
1. for each user $k = 1, \ldots, n$ do
2. \quad Sample a subset $\mathcal{F}_k \subseteq [N]$ uniformly at random with $|\mathcal{F}_k| = M$.
3. \quad $\mathcal{Z}_k \leftarrow \{W_n : n \in \mathcal{F}_k\}$.
4. end for
5. return $(\mathcal{Z}_1, \ldots, \mathcal{Z}_n)$.
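The pseudocode translates almost line for line; a minimal sketch in which files are represented by their indices:

```python
import random

def random_uniform_placement(n, N, M, seed=None):
    """Each of n users independently caches M of the N library files,
    chosen uniformly at random (no coordination between users)."""
    rng = random.Random(seed)
    # Z[k] is the set of file indices cached by user k.
    return [set(rng.sample(range(N), M)) for _ in range(n)]

caches = random_uniform_placement(n=5, N=100, M=10, seed=42)
assert all(len(Z) == 10 for Z in caches)  # every cache holds exactly M files
```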

The randomness is the key design choice. Deterministic placement with matching structure (MAN-style) can give coded-multicast gains (Ch 11), but requires synchronization.
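For contrast, the MAN-style deterministic pattern can be written down directly; a sketch for $K = 4$ users with $t = KM/N = 2$ (the standard subfile-labeling construction, shown here only at the level of cache labels):

```python
from itertools import combinations

def man_placement(K, t):
    """MAN-style deterministic placement: split each file into C(K, t)
    subfiles, one per t-subset of users; user k caches subfile S iff k in S.
    Requires t = K*M/N to be an integer and global synchronization."""
    subsets = list(combinations(range(K), t))
    return subsets, {k: [S for S in subsets if k in S] for k in range(K)}

subsets, placement = man_placement(K=4, t=2)
# Each user holds C(K-1, t-1) = 3 of the C(K, t) = 6 subfile labels,
# i.e. a t/K = M/N = 1/2 fraction of every file.
```

The rigid overlap structure is what enables the coded-multicast delivery of Ch 11, and it is also why this scheme needs the synchronization that random placement avoids.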

Uncoded D2D Delivery

Complexity: Matching phase: $O(n \log n)$ average (expected neighbors). Scheduling phase: $O(n/r(n)^2)$ non-interfering slots.
Input: User caches $\mathcal{Z}_1, \ldots, \mathcal{Z}_n$; demand $d_k$ for user $k$; set of reachable neighbors $\mathcal{N}_k$.
Output: Delivery of $W_{d_k}$ to user $k$.
1. for each user $k = 1, \ldots, n$ do
2. \quad Find any neighbor $j \in \mathcal{N}_k$ with $W_{d_k} \in \mathcal{Z}_j$.
3. \quad if such $j$ exists then
4. \quad\quad Schedule D2D link $j \to k$ to transmit $W_{d_k}$.
5. \quad else
6. \quad\quad Fall back to infrastructure (cellular) delivery.
7. end for
8. Apply spatial TDMA scheduling to resolve interference.

Spatial TDMA scheduling is necessary: neighboring D2D links interfere, so only one link per interference neighborhood can be active in each slot. Implementations use distance-based grouping or graph coloring.
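Both phases can be prototyped in a few lines. The sketch below uses a simplified protocol interference model and a greedy coloring; the helper choice and conflict rule are illustrative, not the only options:

```python
def uncoded_delivery(caches, demands, neighbors):
    """Phase 1: match each user k to any neighbor caching file demands[k].
    Returns the D2D links (helper j, requester k) and the users that must
    fall back to cellular delivery."""
    links, fallback = [], []
    for k, want in enumerate(demands):
        helper = next((j for j in neighbors[k] if want in caches[j]), None)
        if helper is None:
            fallback.append(k)
        else:
            links.append((helper, k))
    return links, fallback

def greedy_tdma_slots(links, neighbors):
    """Phase 2: greedy graph coloring of the link conflict graph. Two links
    conflict when an endpoint of one is, or is a neighbor of, an endpoint
    of the other (simplified protocol interference model)."""
    def conflict(a, b):
        zone = {a[0], a[1]} | neighbors[a[0]] | neighbors[a[1]]
        return b[0] in zone or b[1] in zone
    slots = {}
    for link in links:
        used = {slots[other] for other in slots if conflict(link, other)}
        slot = 0
        while slot in used:
            slot += 1
        slots[link] = slot
    return slots

# Tiny 3-user example: every user finds a helper, but the three links
# share endpoints, so the coloring serializes them into three TDMA slots.
caches = [{0, 1}, {2, 3}, {0, 2}]
neighbors = [{1, 2}, {0, 2}, {0, 1}]
links, fallback = uncoded_delivery(caches, demands=[2, 0, 1], neighbors=neighbors)
slots = greedy_tdma_slots(links, neighbors)
```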

⚠️ Engineering Note

Implementing D2D Caching

Key implementation decisions for D2D caching:

  1. Neighbor discovery. ProSe / Sidelink provides this via periodic beacons. Cost: energy and spectrum.
  2. Cache state advertisement. Users advertise their cache contents (typically hashes) so neighbors can request. Privacy concern: reveals user preferences. Chapter 12 addresses this.
  3. Link scheduling. Distributed coordination (CSMA) or BS-assisted (signaling-heavy) approaches.
  4. Cache refresh. How often is cache content updated? Affects coverage over time as content catalog changes.
  5. Payment / incentives. Users who serve others pay energy and bandwidth; they need incentives. Token-based and credit schemes exist.
  6. Security. Malicious users could serve incorrect content. Integrity checks (HMAC, signatures) are standard.
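Item 6 can be as simple as attaching an HMAC tag keyed by a secret shared with the operator; a minimal sketch using Python's standard hmac module (the key-distribution scheme is out of scope here, and the key below is a hypothetical placeholder):

```python
import hmac
import hashlib

def tag_content(key: bytes, file_id: int, payload: bytes) -> bytes:
    # Bind the tag to both the file identity and its bytes, so a malicious
    # helper cannot substitute one cached file for another.
    msg = file_id.to_bytes(8, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_content(key: bytes, file_id: int, payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag_content(key, file_id, payload), tag)

key = b"operator-provisioned-secret"     # hypothetical shared key
tag = tag_content(key, 7, b"video chunk bytes")
assert verify_content(key, 7, b"video chunk bytes", tag)
assert not verify_content(key, 7, b"tampered bytes", tag)
```

HMAC only authenticates against the operator's key; if end-to-end origin authentication is needed, publisher signatures take its place.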

Most of these are not theoretical bottlenecks but operational ones. Production-quality D2D caching will depend on infrastructure-operator buy-in and user-side incentives. The information-theoretic scaling is already proven.

Practical Constraints
  • 3GPP ProSe (Rel-12+): device discovery and communication standardized
  • Battery drain: D2D transmit adds ~50-100 mW to typical mobile usage
  • Security: cache content integrity requires signatures / HMAC
  • Privacy: advertising cache contents reveals interests

Common Mistake: Don't Forget Cache Refresh Cost

Mistake:

Analyzing D2D caching as a static placement without accounting for refresh overhead.

Correction:

Real libraries churn over time: new content arrives, old content leaves. D2D caches must refresh periodically, either via infrastructure (the BS pushes updates) or via D2D (users exchange cache updates). The refresh overhead is

$$\text{Refresh bits per round} \sim \frac{nM \cdot \text{churn rate}}{\text{refresh period}}.$$

If refresh overhead exceeds delivery savings, the caching gain is lost. Practical systems use refresh periods matched to library turnover (e.g., daily for video, hourly for news).
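Plugging illustrative numbers into the refresh formula makes the point concrete (all values hypothetical; we count whole files rather than bits for simplicity):

```python
# Refresh overhead per round ~ n * M * churn_rate / refresh_period.
n = 1000            # users in the cell
M = 20              # files cached per user
churn_rate = 0.10   # fraction of cached content replaced per day

for refresh_period_days in (1.0, 7.0):
    refresh_files = n * M * churn_rate / refresh_period_days
    print(f"refresh every {refresh_period_days} days -> "
          f"{refresh_files:.0f} file transfers per round")
```

For these numbers, a daily refresh re-pushes 2,000 files per round, i.e. 10% of the $nM = 20{,}000$ files placed at setup; caching pays off only if the D2D caches offload more traffic than that.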

Quick Check

In a D2D caching network of $n$ users each with cache ratio $\mu = M/N$, per-user throughput scales as:

  • $\Theta(1/n)$
  • $\Theta(1/\sqrt{n \log n})$
  • $\Theta(\mu) = \Theta(M/N)$
  • $\Theta(\log n)$