The D2D Caching Network Model

Users as Transmitters

In all prior chapters there was a central server (or cooperating servers) delivering content to passive users. In a D2D (device-to-device) caching network, users play both roles: each device caches content and can transmit that content to nearby neighbors. There is no base station handling delivery; the network is fully distributed.

The question this chapter answers: how does total system throughput scale with the network size $n$ when each user has a cache of size $M$? The answer, due to Ji, Caire, and Molisch (2016), is remarkable: per-user throughput scales as $\Theta(M/N)$, independent of $n$. This is fundamentally better than un-cached ad-hoc networking (Gupta-Kumar's $\Theta(1/\sqrt{n \log n})$) and better than infrastructure delivery serving $n$ users serially ($\Theta(1/n)$ per user). Caching effectively turns the users' combined storage into a distributed, locally accessible copy of the library.

Definition:

D2D Caching Network

A D2D caching network consists of:

  • $n$ single-antenna users deployed in a unit-area 2D region (e.g., uniformly at random).
  • Each user $k$ has a cache holding $M$ files from a library $\mathcal{W}$ of $N$ files.
  • Placement phase: caches are populated offline.
  • Delivery phase: users generate demands $d_k \in [N]$ (possibly randomly per demand round); each user tries to receive its demanded file from one or more nearby users' caches via short-range D2D links.
  • Interference model: the protocol model; at most one simultaneous transmitter within interference radius $r(n)$ of any receiver.

No base station or centralized server participates in the delivery.

The random deployment gives the network a random geometric graph structure: nodes are connected to neighbors within some distance. Scaling analysis depends critically on the choice of $r(n)$: too small and connectivity breaks; too large and interference dominates.
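As a rough illustration of this trade-off, the sketch below builds a random geometric graph on the unit square and tests connectivity at radii around the well-known connectivity threshold $r(n) \approx \sqrt{\log n / (\pi n)}$. Helper names are illustrative; a plain-Python union-find is used so no graph library is needed.

```python
# Sketch: connectivity of a random geometric graph vs. the radius r(n).
# Assumptions: unit square, Euclidean distance; union-find over all O(n^2) pairs.
import math
import random

def connected(points, r):
    """Check whether all points form one component under radius-r links."""
    n = len(points)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

random.seed(0)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]
# Gupta-Kumar connectivity threshold: r(n) ~ sqrt(log n / (pi n)).
r_crit = math.sqrt(math.log(n) / (math.pi * n))
print(connected(pts, 0.5 * r_crit), connected(pts, 2 * r_crit))
```

Running this for several seeds shows the typical sharp transition: well below the threshold the graph fragments, well above it the graph is almost always connected.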

D2D caching network topology

$n$ wireless devices in a unit-area plane. Each node has a cache $\mathcal{Z}_k$ and can transmit short-range to neighbors within the interference radius $r(n)$. No central server; delivery is entirely peer-to-peer.

Definition:

Random Demand Model

In the random demand model, each user $k$ independently generates a demand $d_k \in [N]$ according to a known popularity distribution $P$. The two canonical cases:

  • Uniform demands: $P(d_k = n) = 1/N$ for all $n$. Each file is equally likely.
  • Zipf demands: $P(d_k = n) \propto n^{-\alpha}$ with skew parameter $\alpha > 0$. Realistic for video/web traffic.

The aggregate demand pattern $\{d_1, \ldots, d_n\}$ is revealed at delivery time.
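The demand model above can be sketched in a few lines (function names are illustrative; setting $\alpha = 0$ recovers the uniform case):

```python
# Sketch of the random demand model: each user independently draws a demand
# from a Zipf(alpha) popularity distribution over files 1..N.
import random

def zipf_pmf(N, alpha):
    """Normalized Zipf weights p(n) proportional to n^(-alpha)."""
    w = [n ** (-alpha) for n in range(1, N + 1)]
    Z = sum(w)
    return [x / Z for x in w]

def sample_demands(n_users, N, alpha, rng=random):
    """One demand per user, drawn i.i.d. from the popularity distribution."""
    p = zipf_pmf(N, alpha)
    return rng.choices(range(1, N + 1), weights=p, k=n_users)

random.seed(1)
demands = sample_demands(n_users=10, N=100, alpha=0.8)
print(demands)
```

With $\alpha > 0$ the low-index (popular) files dominate the sample, which is what makes popularity-aware placement attractive later in the section.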

Placement: Random or Designed

Placement in D2D caching has more flexibility than in shared-link MAN:

  • Random uniform placement. Each user independently caches $M$ random files. Simple, robust to changes; asymptotically optimal for uniform demands.
  • Popularity-aware placement. User caches reflect the Zipf distribution: cache the most popular files, with some randomness to avoid duplication across neighbors.
  • Combinatorial (MAN-style) placement. Users' caches follow a deterministic MAN pattern, enabling coded multicast in Chapter 11.

Each placement choice yields a different asymptotic scaling. The Ji-Caire-Molisch result proves $\Theta(M/N)$ scaling under random or popularity-aware placement for random demands.
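The first two placement rules can be sketched as follows (a minimal illustration with hypothetical helper names; the popularity-aware rule here simply samples distinct files with Zipf weights, one of several reasonable randomized variants):

```python
# Sketch of two placement rules for D2D caching (illustrative helpers).
import random

def random_uniform_placement(n_users, N, M, rng):
    """Each user independently caches M distinct files chosen uniformly."""
    return [set(rng.sample(range(N), M)) for _ in range(n_users)]

def popularity_aware_placement(n_users, N, M, alpha, rng):
    """Each user caches M distinct files sampled with Zipf(alpha) weights,
    so popular files are cached more often, but caches are randomized
    rather than identical across neighbors."""
    w = [(f + 1) ** (-alpha) for f in range(N)]
    caches = []
    for _ in range(n_users):
        picks = set()
        while len(picks) < M:
            picks.add(rng.choices(range(N), weights=w, k=1)[0])
        caches.append(picks)
    return caches

rng = random.Random(0)
uniform_caches = random_uniform_placement(10, 100, 10, rng)
zipf_caches = popularity_aware_placement(10, 100, 10, 0.8, rng)
```

The combinatorial MAN-style placement is deterministic and tied to the coded-multicast delivery of Chapter 11, so it is not sketched here.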

Example: A Small D2D Scenario

Consider $n = 10$ users in a 1 km × 1 km area, each with a cache of $M = 10$ files from a library of $N = 100$ files ($\mu = M/N = 0.1$). Random uniform placement: each user caches 10 random files. Interference radius $r = 100$ m, so each user has on the order of one neighbor on average. Estimate the hit probability and throughput.
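A quick Monte Carlo sketch of the hit-probability part of this estimate, under the stated assumptions (uniform demands, random uniform placement, unit square normalized so that $r = 0.1$; the function name and parameter defaults are illustrative). A demand counts as a hit if the file is in the user's own cache or in any neighbor's cache within radius $r$:

```python
# Monte Carlo estimate of the D2D hit probability for the worked example.
# Self-cache alone gives M/N = 0.1; nearby caches add a little on top.
import math
import random

def hit_probability(n=10, N=100, M=10, r=0.1, trials=2000, seed=0):
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        caches = [set(rng.sample(range(N), M)) for _ in range(n)]
        for k in range(n):
            demand = rng.randrange(N)  # uniform demand
            reachable = caches[k].union(
                *(caches[j] for j in range(n)
                  if j != k and math.dist(pts[k], pts[j]) <= r))
            hits += demand in reachable
            total += 1
    return hits / total

print(round(hit_probability(), 3))
```

The estimate lands only slightly above $M/N = 0.1$: with these parameters the network is sparse, so most hits come from the user's own cache, which is exactly why the choice of $r(n)$ matters in the scaling analysis.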

The Role-Reversal Insight

In the shared-link model, users are passive receivers; the server does all the work. In the D2D model, users are active participants: they both contribute cache storage and perform delivery. This role reversal has two consequences:

  1. Aggregation effect. The collective cache of $n$ users holds $nM$ files. When $nM \geq N$, the entire library is present somewhere in the network. Any user's demand can in principle be served locally.
  2. Spatial reuse. Short-range D2D links don't interfere with each other beyond the interference radius. Multiple concurrent D2D transmissions can coexist.

Caching provides (1); spatial reuse provides (2). The Ji-Caire-Molisch scaling law shows that these combine to give the $\Theta(M/N)$ per-user throughput, a remarkable scaling property.
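To see what $\Theta(M/N)$ scaling means numerically, the sketch below tabulates the three growth rates from this section with all constants omitted (the values are orders of growth, not absolute rates):

```python
# Growth-rate comparison of the three per-user throughput scalings in the text.
import math

def d2d_caching(n, M, N):
    return M / N                          # Theta(M/N): independent of n

def adhoc(n):
    return 1 / math.sqrt(n * math.log(n)) # Gupta-Kumar ad-hoc scaling

def serial_infra(n):
    return 1 / n                          # serial infrastructure delivery

for n in (10**2, 10**4, 10**6):
    print(n, d2d_caching(n, M=10, N=100), adhoc(n), serial_infra(n))
```

As $n$ grows, the ad-hoc and serial rates vanish while the D2D caching rate stays pinned at $M/N$, which is the content of the scaling law.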

⚠️ Engineering Note

D2D in Deployed Systems

D2D technology is standardized but not widely deployed as of 2026:

  1. 3GPP LTE-D2D (ProSe, Rel-12+). Direct device discovery and communication; used in public safety, limited commercial rollout.
  2. 5G NR Sidelink (Rel-16+). Vehicle-to-everything (V2X) and D2D extensions. Wide support but minimal commercial uptake beyond V2X.
  3. Wi-Fi Direct / Bluetooth Mesh. Alternative device-to-device technologies; used for file transfer and IoT meshes.

D2D caching specifically is not yet deployed at scale. The main barriers are (a) operator incentives (who gets billed for D2D traffic?), (b) privacy (users may not want to serve others' content), and (c) energy (D2D transmission drains the battery). The theoretical gains are well established; the business case is evolving. The CommIT group has argued for D2D caching in vehicular and aerial network contexts, where centralized infrastructure is expensive.

Practical Constraints
  • 3GPP LTE-D2D (ProSe), Rel-12+, standardized
  • 5G NR Sidelink supports V2X and device-to-device
  • Wi-Fi Direct: peer-to-peer up to 200 Mbps
  • Energy cost: D2D transmit ~100 mW vs. cellular ~500 mW