D2D Coded Caching
Caching Without a Server: Device-to-Device Delivery
What if there is no server at all during the delivery phase? In dense networks (stadiums, concerts, urban hotspots), nearby users can communicate directly via device-to-device (D2D) links. If each user has cached parts of the file library, they can exchange missing content directly — no base station needed.
Ji, Caire, and Molisch showed that D2D coded caching achieves a remarkable throughput scaling: the per-user throughput is $\Theta(M/N)$ files per time slot, independent of the number of users $K$. This means the system scales perfectly: adding more users does not reduce throughput, because each new user brings both a new demand and new cached content that helps others.
Definition: D2D Coded Caching Model
Consider $K$ users placed in a network, each with a cache of size $MF$ bits (i.e., $M$ files) storing content from a library of $N$ files of $F$ bits each. Communication occurs in two phases:
Placement phase: Same as the centralized model (MAN-style placement without knowledge of demands).
D2D delivery phase: Users communicate over D2D links. The network is modeled as a one-hop interference network: each user can transmit to all nearby users within distance $r$, and transmissions from different users interfere.
The key differences from the shared-link model are:
- There is no server; all delivery is peer-to-peer.
- Spatial reuse allows multiple simultaneous transmissions if they are sufficiently separated geographically.
- The interplay between coded multicasting gain and spatial reuse gain determines the throughput scaling.
Theorem: Throughput Scaling of D2D Coded Caching
For $K$ users uniformly distributed in a unit square, $N$ files, and cache size $M$ files per user, with D2D communication range $r = \Theta(\sqrt{g/K})$ (ensuring $\Theta(g)$ neighbors per user), the per-user throughput with coded caching is:
$$T_{\text{per-user}} = \Theta\!\left(\frac{M}{N}\right) \text{ files per time slot}$$
for $gM \ge N$ (so that each neighborhood collectively caches the entire library). This scaling is independent of $K$ and order-optimal (matching the cut-set upper bound to within a constant factor).
The total network throughput is $\Theta(KM/N)$ files per slot, which grows linearly with the number of users.
The result combines two gains:
- Coded multicasting gain: Each D2D transmission serves $\Theta(gM/N)$ users simultaneously, as in the MAN scheme, applied within a cluster of $g$ users.
- Spatial reuse gain: The network supports $\Theta(K/g)$ simultaneous transmissions (each with local range $r = \Theta(\sqrt{g/K})$).

The product gives the number of user-demands served per slot: $\Theta\!\left(\frac{gM}{N} \cdot \frac{K}{g}\right) = \Theta\!\left(\frac{KM}{N}\right)$. Dividing by $K$ users, each requesting one file per slot, gives $\Theta(M/N)$ per user (equivalently, a delivery load of $\Theta(N/M)$ slots per user).
After careful accounting, the throughput simplifies to $T = \Theta(M/N)$. The remarkable feature is that $K$ cancels: the spatial reuse gain and the per-user demand grow at the same rate, leaving throughput independent of $K$.
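The cancellation of $K$ can be checked numerically. The sketch below multiplies the two gains and divides by the number of users, using exact rational arithmetic; the parameter values are illustrative, not from the text:

```python
from fractions import Fraction

def per_user_throughput(K, g, M, N):
    """Order-of-magnitude accounting of the two gains (constants ignored).
    K users, clusters of g users, cache M files, library of N files."""
    multicast_gain = Fraction(g * M, N)  # users served per transmission (MAN, within a cluster)
    reuse_gain = Fraction(K, g)          # simultaneous transmissions (one per cluster)
    demands_per_slot = multicast_gain * reuse_gain  # = K*M/N
    return demands_per_slot / K          # per user: M/N -- K cancels

# Same per-user throughput for very different network sizes:
print(per_user_throughput(K=1_000, g=100, M=5, N=100))    # → 1/20
print(per_user_throughput(K=100_000, g=100, M=5, N=100))  # → 1/20
```

Note that `g` drops out of the final expression as well: a larger cluster raises the multicast gain but lowers the reuse gain by the same factor.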
Spatial clustering
Divide the unit square into clusters of area $g/K$, each containing $\Theta(g)$ users. Within each cluster, users can communicate via D2D links. Between clusters, simultaneous transmissions can be scheduled without interference (spatial TDMA with a constant frequency reuse factor).
Intra-cluster coded delivery
Within each cluster of $g$ users, apply the MAN coded caching scheme (in its D2D form, with users taking turns transmitting). The coded multicasting gain within the cluster is $\Theta(gM/N)$. The delivery load per cluster is $\frac{N}{M} - 1 = \Theta(N/M)$ file transmissions, provided $gM \ge N$.
Global throughput
There are $\Theta(K/g)$ clusters, each completing delivery in $\Theta(N/M)$ time slots. With spatial reuse (all clusters operate simultaneously), the per-user throughput is $T = \Theta(M/N)$ for $gM \ge N$.
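The three steps above can be turned into a back-of-the-envelope calculator. The cluster size $g$ and the other parameter values below are illustrative assumptions, and all constants are ignored:

```python
def cluster_scheme(K, g, M, N):
    """Sketch of the clustered D2D scheme: returns (number of clusters,
    delivery load per cluster in file units, per-user throughput)."""
    assert g * M >= N, "each cluster must collectively cache the whole library"
    num_clusters = K // g          # clusters tiling the unit square
    load_per_cluster = N / M - 1   # D2D MAN delivery load within one cluster
    # All clusters run in parallel (spatial reuse), so total delivery time is
    # the per-cluster time, during which every user obtains one file.
    throughput = 1 / load_per_cluster
    return num_clusters, load_per_cluster, throughput

print(cluster_scheme(K=10_000, g=100, M=5, N=100))  # throughput ≈ M/N
```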
D2D Coded Caching
Ji, Caire, and Molisch established the fundamental throughput scaling law for D2D networks with coded caching. The main result is that per-user throughput scales as $\Theta(M/N)$, independent of the number of users $K$, by combining coded multicasting within local clusters with spatial reuse across the network. This resolved the open question of whether coded caching gains survive in distributed (serverless) settings.
D2D Coded Caching: Per-User Throughput Scaling
Compare per-user throughput as a function of the number of users $K$ for: D2D coded caching, D2D uncoded caching, and centralized (server-based) delivery. Observe that D2D coded caching throughput is independent of $K$.
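The comparison can be reproduced numerically with simple order-of-magnitude models (the uncoded model assumes one unicast transmission per missing file; constants and practical overheads are ignored, and the parameter values are illustrative):

```python
def d2d_coded(K, M, N):
    # D2D coded caching: per-user throughput Theta(M/N), flat in K
    return M / N

def d2d_uncoded(K, M, N):
    # No multicast gain: K*(1 - M/N) unicast file transmissions share the channel
    return 1 / (K * (1 - M / N))

def centralized_man(K, M, N):
    # Shared-link MAN over a unit-rate broadcast link, per user
    return (1 + K * M / N) / (K * (1 - M / N))

M, N = 5, 100
for K in (100, 1_000, 10_000):
    print(K, d2d_coded(K, M, N), d2d_uncoded(K, M, N), centralized_man(K, M, N))
```

The coded D2D curve stays flat as $K$ grows, while the uncoded curve decays as $1/K$.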
Coded Caching Architectures: Comparison
| Architecture | Server | Delivery channel | DoF / Gain | Throughput scaling |
|---|---|---|---|---|
| Shared-link (MAN) | Yes | Error-free broadcast | $1 + KM/N$ (total) | $\Theta(M/N)$ (per-user) |
| MISO BC | Yes ($L$ antennas) | Wireless MISO BC | $L + KM/N$ (total) | $\Theta\!\left(\frac{L + KM/N}{K}\right)$ (per-user) |
| Fog-RAN | Cloud + edge caches | Fronthaul + wireless | Depends on fronthaul | Depends on fronthaul |
| D2D | No server | D2D interference network | Spatial reuse + $\Theta(gM/N)$ | $\Theta(M/N)$ (per-user) |
| Uncoded baseline | Yes | Any | 1 (no coding gain) | $\Theta(1/K)$ (per-user, server) |
Example: D2D Coded Caching in a Stadium
A stadium has a large number $K$ of users (tens of thousands), each with a 5 GB cache. The content library has $N = 100$ popular videos of 1 GB each. Users want to watch replays of key moments. Compute:
(a) The cache ratio and coded caching parameter. (b) The per-user throughput with D2D coded caching. (c) How long to deliver one video to each requesting user.
Cache ratio
$M = 5$ GB / 1 GB per file $= 5$ files. $M/N = 5/100 = 1/20$. $t = KM/N = K/20$.
Per-user throughput
$T = \Theta(M/N) = 1/20$ files per time slot. This means approximately $N/M = 20$ time slots to deliver one file per user.
Delivery time estimate
With a short-range, high-rate D2D link at 1 Gbps over 10 m, each user effectively receives useful data at $1\text{ Gbps} \times \frac{1}{20} = 50$ Mbps, so delivering 1 GB ($= 8$ Gb) takes approximately $8\text{ Gb} / 50\text{ Mbps} = 160$ seconds.
With uncoded caching (multicast gain 1, so per-user throughput decays as $\Theta(1/K)$ as $K$ grows), delivery would require roughly $KM/N$ times more time slots, making it impractical.
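The arithmetic of parts (a)–(c) can be checked in a few lines. The 1 Gbps link rate and $N = 100$ library size are the assumptions stated above:

```python
cache_gb, file_gb, N = 5, 1, 100       # stadium example parameters
M = cache_gb // file_gb                # (a) M = 5 files cached per user
mu = M / N                             # cache ratio M/N = 1/20
slots = N / M                          # (b) ~20 slots to deliver one file per user
link_gbps = 1.0                        # assumed D2D link rate
eff_mbps = link_gbps * 1000 * mu       # effective per-user rate: 50 Mbps
secs = file_gb * 8 * 1000 / eff_mbps   # (c) 8 Gb / 50 Mbps = 160 s
print(M, mu, slots, eff_mbps, secs)
```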
Practical caveat
The subpacketization $\binom{K}{KM/N}$ is astronomically large for stadium-scale $K$, making the full MAN scheme impractical. In practice, users are clustered into groups of $g = 50$–$100$, and the coded caching gain is $gM/N$ rather than $KM/N$. Even with $g = 100$: gain $= 100 \times \frac{5}{100} = 5$, still a significant improvement over uncoded delivery.
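A quick sanity check on these subpacketization numbers (the user count $K = 20{,}000$ is an assumed stadium-scale value, not from the text):

```python
import math

K, M, N = 20_000, 5, 100         # K is an assumed stadium-scale user count
t = K * M // N                   # MAN parameter t = KM/N = 1000
digits = len(str(math.comb(K, t)))
print(digits)                    # over a thousand digits: hopeless in practice

g = 100                          # cluster size
print(math.comb(g, g * M // N))  # → 75287520 subpackets per group: feasible
print(g * M / N)                 # → 5.0 (clustered multicast gain)
```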
Common Mistake: Ignoring Interference in D2D Coded Caching
Mistake:
Assuming that all D2D transmissions in the network can occur simultaneously without interference, treating the D2D network as a set of parallel links.
Correction:
D2D transmissions interfere: a user transmitting at distance $r$ from its intended receiver also interferes with users within distance $\Theta(r)$ of the transmitter. The throughput scaling accounts for this by using spatial TDMA (or frequency reuse) to schedule non-interfering transmissions. The spatial reuse factor is $\Theta(K/g)$ (proportional to the number of non-overlapping clusters), which exactly compensates for the per-cluster throughput reduction due to interference. Ignoring interference leads to throughput overestimates by a factor of $\Theta(g)$.
Quick Check
In D2D coded caching, the per-user throughput is independent of $K$. Why doesn't adding more users reduce throughput (as it would in a conventional system)?
Because each new user brings cached content that helps other users decode
Because D2D links have unlimited capacity
Because the server increases its transmission power
Each new user adds demand (needs one file) but also adds cache content (has $M$ files cached). The coded multicasting gain from additional caches exactly compensates for the additional demand. This is the self-scaling property of coded caching in D2D networks.
Key Takeaway
D2D coded caching achieves perfect throughput scaling. With $K$ users, each caching $M$ out of $N$ files, D2D coded delivery achieves per-user throughput $\Theta(M/N)$, independent of $K$. This is possible because each new user contributes cached content that creates multicast opportunities, and spatial reuse allows parallel D2D transmissions. The result is order-optimal: it matches the cut-set upper bound. The practical limitation is subpacketization, addressed by user clustering at the cost of reduced multicasting gain.