The D2D Caching Network Model
Users as Transmitters
In all prior chapters there was a central server (or cooperating servers) delivering content to passive users. In a D2D (device-to-device) caching network, users play both roles: each device caches content and can transmit that content to nearby neighbors. There is no base station handling delivery; the network is fully distributed.
The question this chapter answers: how does total system throughput scale with network size n when each user has a cache of size M files from a library of m files? The answer, due to Ji, Caire, and Molisch (2016), is remarkable: per-user throughput scales as Θ(M/m), independent of n. This is fundamentally better than un-cached ad-hoc networking (Gupta-Kumar's Θ(1/√(n log n))) and better than infrastructure delivery with users served serially (Θ(1/n) per user). Caching effectively converts demand for a globally distributed library into local D2D traffic.
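The three scaling laws can be displayed side by side (reconstructed from the discussion above; T denotes per-user throughput):

```latex
\underbrace{T_{\mathrm{D2D}} = \Theta\!\left(\frac{M}{m}\right)}_{\text{D2D caching}}
\qquad
\underbrace{T_{\mathrm{ad\,hoc}} = \Theta\!\left(\frac{1}{\sqrt{n \log n}}\right)}_{\text{Gupta--Kumar}}
\qquad
\underbrace{T_{\mathrm{serial}} = \Theta\!\left(\frac{1}{n}\right)}_{\text{serial unicast}}
```

Only the first is independent of the number of users n.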
Definition: D2D Caching Network
D2D Caching Network
A D2D caching network consists of:
- n single-antenna users deployed in a unit-area 2D region (e.g., uniformly at random).
- Each user has a cache of size M files from a library of m files.
- Placement phase: caches are populated offline.
- Delivery phase: users generate demands (possibly randomly per demand round); each user tries to receive its demanded file from one or more nearby users' caches via short-range D2D links.
- Interference model: the protocol model, i.e., at most one simultaneous transmitter within the interference radius of any receiver.
No base station or centralized server participates in the delivery.
The random deployment gives the network a random geometric graph structure: nodes are connected to neighbors within some distance r. Scaling analysis depends critically on the choice of r: too small and connectivity breaks; too large and interference dominates.
D2D caching network topology
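A quick way to build intuition for the choice of r is to simulate the random geometric graph directly. The sketch below (illustrative; the function name is mine, not from the text) drops n nodes uniformly on the unit square and measures the average neighbor count, which away from the border concentrates near n·π·r²:

```python
import math
import random

def avg_neighbors(n, r, trials=5, seed=0):
    """Average neighbor count in a random geometric graph:
    n nodes uniform on the unit square, edge iff distance <= r."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        for i, (xi, yi) in enumerate(pts):
            total += sum(
                1 for j, (xj, yj) in enumerate(pts)
                if j != i and math.hypot(xi - xj, yi - yj) <= r
            )
    return total / (n * trials)
```

With n = 200 and r = 0.05, the average comes out near 200·π·0.05² ≈ 1.6 (slightly less because of border effects), i.e., the "~1 neighbor on average" regime used in the worked example below.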
Definition: Random Demand Model
Random Demand Model
In the random demand model, each user independently generates a demand according to a known popularity distribution q = (q_1, …, q_m). The two canonical cases:
- Uniform demands: q_f = 1/m for all f. Each file is equally likely.
- Zipf demands: q_f ∝ f^(−γ) with concentration parameter γ > 0. Realistic for video/web traffic.
The aggregate demand pattern is revealed at delivery time.
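Both demand models are easy to instantiate in code. A minimal sketch (helper names are mine; files are assumed to be labeled 1..m):

```python
import random

def zipf_pmf(m, gamma):
    """Popularity q_f proportional to f^(-gamma) for f = 1..m.
    gamma = 0 reduces to the uniform demand model."""
    weights = [f ** (-gamma) for f in range(1, m + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_demands(n, pmf, seed=0):
    """Each of n users independently draws one file index from pmf."""
    rng = random.Random(seed)
    return rng.choices(range(1, len(pmf) + 1), weights=pmf, k=n)
```

The aggregate demand vector `sample_demands(n, pmf)` is exactly what the delivery phase sees revealed at delivery time.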
Placement: Random or Designed
Placement in D2D caching has more flexibility than in shared-link MAN:
- Random uniform placement. Each user independently caches M random files. Simple, robust to changes; asymptotically optimal for uniform demands.
- Popularity-aware placement. User caches reflect the Zipf distribution: cache the most popular files, with some randomness to avoid duplication across neighbors.
- Combinatorial (MAN-style) placement. Users' caches follow a deterministic MAN pattern, enabling coded multicast in Chapter 11.
Each placement choice yields different asymptotic scaling. The Ji-Caire-Molisch result proves Θ(M/m) scaling under random or popularity-aware placement for random demands.
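The first two placement rules can be sketched as follows (illustrative code with hypothetical helper names; the popularity-aware rule here is a simple independent weighted draw, not the optimized placement from the literature):

```python
import random

def random_uniform_placement(n, m, M, seed=0):
    """Each of n users caches M distinct files drawn uniformly from 1..m."""
    rng = random.Random(seed)
    return [set(rng.sample(range(1, m + 1), M)) for _ in range(n)]

def popularity_aware_placement(n, m, M, pmf, seed=0):
    """Each user draws M distinct files with probability weighted by pmf,
    so popular files end up replicated in more caches."""
    rng = random.Random(seed)
    caches = []
    for _ in range(n):
        cache = set()
        while len(cache) < M:
            cache.add(rng.choices(range(1, m + 1), weights=pmf, k=1)[0])
        caches.append(cache)
    return caches
```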
Example: A Small D2D Scenario
Consider n users on a 1 km × 1 km area, each with a cache of M = 10 files from a library of m = 1000 files (M/m = 0.01). Random uniform placement: each user caches 10 random files. The interference radius is chosen so that each user has ~1 neighbor on average. Estimate hit probability and throughput.
Hit probability
Probability a user's demand is in its own cache: M/m = 10/1000 = 0.01. Probability a user's demand is in a single neighbor's cache: likewise ≈ 0.01. With one neighbor, the total local hit probability is ≤ 0.02 (union bound); the independent-caches value is 1 − (0.99)² ≈ 0.0199.
Multiple neighbors
If the interference radius were larger, giving k > 1 neighbors on average, the hit probability rises to roughly 1 − (1 − M/m)^(k+1) ≈ (k+1)·M/m for small M/m. More neighbors means a higher hit rate.
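A Monte-Carlo check of these numbers (a sketch under the example's assumptions, uniform demands and random uniform placement; the function name is mine):

```python
import random

def local_hit_prob(m, M, k, trials=20000, seed=0):
    """Estimate the probability that a uniform demand is found in the
    user's own cache or in any of k neighbor caches, each holding
    M uniform-random distinct files out of m."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        demand = rng.randrange(1, m + 1)
        if any(demand in set(rng.sample(range(1, m + 1), M))
               for _ in range(k + 1)):
            hits += 1
    return hits / trials

# One neighbor, M = 10, m = 1000: analytic value 1 - (1 - 0.01)**2 = 0.0199
```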
Throughput scaling
Per-user throughput T ≈ C · M/m, where C is the single-link D2D rate. As n → ∞ at fixed density, throughput per user remains Θ(M/m), independent of n. This is the main result of Ch. 10.
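As a back-of-envelope formula (my own sketch; C and k are the link rate and neighbor count from the example, and n deliberately does not appear):

```python
def per_user_throughput(C, M, m, k=1):
    """Crude per-user D2D throughput estimate: link rate C times the
    local hit probability 1 - (1 - M/m)^(k+1). Independent of n."""
    p_hit = 1 - (1 - M / m) ** (k + 1)
    return C * p_hit
```

With an illustrative C = 100 Mbit/s, M = 10, m = 1000 and one neighbor, this gives about 2 Mbit/s per user, whether the network has 10 users or 10,000.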
Engineering interpretation
A D2D caching network of 1000 devices delivers the same per-user rate as a D2D network of 10 devices (at fixed density and fixed M/m). Caching effectively scales with the network: more users means more cached content floating around.
The Role-Reversal Insight
In the shared-link model, users are passive receivers; the server does all the work. In the D2D model, users are active participants: they both contribute cache storage and perform delivery. This role reversal has two consequences:
- Aggregation effect. The collective cache of n users holds nM files. When nM ≥ m, the entire library can be stored somewhere in the network. Any user's demand can in principle be served locally.
- Spatial reuse. Short-range D2D links don't interfere with each other beyond the interference radius. Multiple concurrent D2D transmissions can coexist.
Caching provides the aggregation effect (1); spatial reuse provides the concurrency (2). The Ji-Caire-Molisch scaling law shows that these combine to give the Θ(M/m) per-user throughput, a remarkable scaling property.
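The aggregation effect is easy to verify numerically. This sketch (illustrative, assuming random uniform placement; the function name is mine) measures what fraction of the library is cached somewhere in the network; once nM is well above m, coverage is essentially complete:

```python
import random

def library_coverage(n, m, M, seed=0):
    """Fraction of the m-file library present in at least one of the
    n caches, each holding M uniform-random distinct files."""
    rng = random.Random(seed)
    covered = set()
    for _ in range(n):
        covered |= set(rng.sample(range(1, m + 1), M))
    return len(covered) / m

# Expected coverage is 1 - (1 - M/m)^n, which approaches 1 once nM >> m.
```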
D2D in Deployed Systems
D2D technology is standardized but not widely deployed as of 2026:
- 3GPP LTE-D2D (ProSe, Rel-12+). Direct device discovery and communication; used in public safety, limited commercial rollout.
- 5G NR Sidelink (Rel-16+). Vehicle-to-everything (V2X) and D2D extensions. Wide support but minimal commercial uptake beyond V2X.
- Wi-Fi Direct / Bluetooth Mesh. Alternative device-to-device technologies; used for file transfer and IoT meshes.
D2D caching specifically is not yet deployed at scale. The main barriers are (a) operator incentive (who gets billed for D2D traffic?), (b) privacy (users don't want to serve others), and (c) energy (D2D transmission drains battery). The theoretical gains are well-established; the business case is evolving. The CommIT group has argued for D2D caching in vehicular and aerial network contexts, where centralized infrastructure is expensive.
- 3GPP LTE-D2D (ProSe), Rel-12+: standardized
- 5G NR Sidelink: supports V2X and device-to-device
- Wi-Fi Direct: peer-to-peer, up to 200 Mbps
- Energy cost: D2D transmit ~100 mW vs. cellular ~500 mW