Exercises
ex-cc-ch10-01
Easy. State the Ji-Caire-Molisch per-user throughput scaling law for D2D caching networks and its key assumptions.
Result: $T(n) = \Theta(M/m)$, independent of $n$.
Answer
$T(n) = \Theta(M/m)$ under: (i) random uniform deployment of $n$ users in a unit area, (ii) random uniform demand over a library of $m$ files, (iii) protocol interference model with transmission radius $r(n)$, (iv) fixed memory ratio $M/m$ with $nM \ge m$, so the aggregate cache covers the library.
ex-cc-ch10-02
Easy. For $n$ users, a library of $m$ files, and $M$ cached files per user, state the expected number of neighbors caching a given file (at a typical user) under random uniform placement.
Expected neighbors caching a given file: $k \cdot M/m$, where $k$ is the expected neighbor count.
Compute
With $k$ expected neighbors in range, each neighbor caches a fraction $M/m = 0.25$ of the library (25%). Expected neighbors caching any fixed file: $0.25k$, so already for $k \ge 4$ the user is likely to find at least one holder.
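The compute step can be sketched numerically. The neighbor count $k = 8$ below is an illustrative assumption (the exercise's exact figure is not reproduced here); the cache ratio $M/m = 0.25$ matches the 25% stated above.

```python
# Numeric sketch of the compute step. k = 8 is an illustrative assumption;
# the cache ratio 0.25 is the 25% from the exercise text.
def expected_holders(k: int, cache_ratio: float) -> float:
    """Expected number of neighbors caching one fixed file."""
    return k * cache_ratio

def hit_probability(k: int, cache_ratio: float) -> float:
    """P(at least one neighbor caches it), assuming independent caches."""
    return 1.0 - (1.0 - cache_ratio) ** k

print(expected_holders(8, 0.25))           # 2.0 expected holders
print(round(hit_probability(8, 0.25), 2))  # 0.9
```

Even a handful of neighbors pushes the hit probability close to one, which is the intuition behind the "likely to find at least one" conclusion.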
ex-cc-ch10-03
Easy. Compare throughput scaling of D2D caching, Gupta-Kumar ad hoc, and infrastructure cellular in one paragraph.
$\Theta(M/m)$ vs $\Theta(1/\sqrt{n \log n})$ vs $\Theta(1/n)$.
Answer
D2D caching: per-user $\Theta(M/m)$, constant in $n$, the only scheme whose per-user throughput doesn't decay. Gupta-Kumar ad hoc (no caching): $\Theta(1/\sqrt{n \log n})$, vanishing. Infrastructure cellular (no caching): $\Theta(1/n)$, vanishing faster. D2D caching is the only scalable architecture at large network size.
ex-cc-ch10-04
Easy. Why is the scaling law "independent of $n$"? Intuitively, what balances as $n$ grows?
Supply and demand scale together.
Answer
As $n$ grows: (i) aggregate cache capacity grows as $nM$; (ii) aggregate demand grows as $n$; (iii) spatial reuse (simultaneous non-interfering transmissions) grows as $\Theta(nM/m)$. All three scale linearly with $n$. The per-user throughput = (spatial reuse × hit probability × link capacity) / $n$ stays constant: $\Theta(nM/m) \cdot \Theta(1) \cdot \Theta(1) / n = \Theta(M/m)$. Supply, demand, and parallelism scale together, giving flat per-user scaling.
ex-cc-ch10-05
Easy. What does the protocol interference model assume, and why does it matter for the scaling analysis?
Interference radius $(1+\Delta)r$.
Answer
Protocol model: two simultaneous transmissions interfere if a receiver is within distance $(1+\Delta)r$ of a non-intended transmitter, where $\Delta > 0$ is a guard parameter. This gives a combinatorial non-interference structure (graph coloring). The scaling law depends on the transmission radius $r(n)$: choosing $r(n) = \Theta(\sqrt{m/(nM)})$, just large enough that a neighborhood collectively caches the whole library, balances connectivity (not too small) against interference (not too large). The result assumes this optimal scaling.
ex-cc-ch10-06
Medium. Hit probability analysis. Derive the hit probability for a user in a random uniform D2D network with cache ratio $M/m$ and $k$ expected neighbors.
Hit = any neighbor caches the file.
Union bound / inclusion-exclusion.
Single-neighbor
Given a neighbor caches a random $M/m$-fraction of the library, the probability it caches the requested file is $M/m$. Probability it doesn't: $1 - M/m$.
Multi-neighbor
With $k$ independent neighbors, probability none caches the file: $(1 - M/m)^k$. Probability at least one does (hit): $p_{\text{hit}} = 1 - (1 - M/m)^k$.
Asymptotics
As $k \to \infty$, $p_{\text{hit}} \to 1$. For $M/m = 0.25$, hit probability $1 - (0.75)^k$. Since $k$ scales with the node density in range ($k \approx n\pi r^2$), even modest $M/m$ gives near-certain hits at large $k$.
Tightness
This analysis ignores neighbor correlation (same neighbors may cache same files). The Ji-Caire-Molisch analysis accounts for this correctly.
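The closed form assumes each neighbor's cache is an independent uniform random $M$-subset; a quick Monte Carlo check against that model (parameters below are illustrative) confirms the formula before worrying about correlation effects.

```python
import random

# Monte Carlo check of p_hit = 1 - (1 - M/m)^k under independent,
# uniformly random caches. k, m, M are illustrative parameters.
def simulate_hit(k: int, m: int, M: int, trials: int = 4000, seed: int = 0) -> float:
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target = rng.randrange(m)  # requested file
        # each of k neighbors caches a uniform random M-subset of the library
        if any(target in rng.sample(range(m), M) for _ in range(k)):
            hits += 1
    return hits / trials

analytic = 1 - (1 - 25 / 100) ** 5
print(simulate_hit(k=5, m=100, M=25), analytic)  # the two agree closely
```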
ex-cc-ch10-07
Medium. Constant factor. Under the Ji-Caire-Molisch analysis, what constant multiplies $M/m$ in the per-user throughput (within the $\Theta(\cdot)$ notation)?
Depends on interference radius and link capacity.
Approximation
Throughput constant involves the hit probability, the number of simultaneous non-interfering transmissions per unit area, and the per-link rate. Explicit form (simplified): $T \approx c \cdot R_{\text{link}} \cdot \frac{M}{m} \cdot \left(1 - (1 - M/m)^k\right)$, where $k$ is the expected number of neighbors and $c$ captures the interference (reuse) penalty.
Operational
For typical parameters (5 GHz band, 10 m range, 100 Mbps/link), the constant $c$ can be 0.1-1. The per-user rate is then on the order of 10-100 Mbps.
ex-cc-ch10-08
Medium. Zipf popularity. How does the throughput change under Zipf demand vs uniform? Assume Zipf exponent $0 < \alpha < 1$.
Zipf concentration improves hit rate (more users want popular files).
Ji-Tulino-Llorca-Caire 2017 extension.
Uniform demand
Hit rate $\propto M/m$. Throughput: $\Theta(M/m)$.
Zipf demand with $\alpha > 0$
Hit rate scales with the demand mass of the cached files, $H_M(\alpha)/H_m(\alpha)$ (ratio of generalized harmonic numbers). For $\alpha > 0$, this exceeds $M/m$: concentration helps caching.
Throughput
Throughput: $\Theta\!\left((M/m)^{1-\alpha}\right)$. For $\alpha \to 1$: $\Theta(1)$ up to logarithmic factors, which can exceed $\Theta(M/m)$ substantially for small $M/m$. Zipf demand benefits D2D caching disproportionately.
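The uniform-vs-Zipf gap is easy to quantify when each cache stores the $M$ most popular files: the hit rate is the Zipf mass of the top-$M$ prefix. The values $m = 1000$, $M = 100$, $\alpha = 0.8$ below are illustrative assumptions.

```python
# Hit rate when caches store the M most popular files and demand is
# Zipf(alpha) over m files. m, M, alpha are illustrative assumptions.
def zipf_hit_rate(m: int, M: int, alpha: float) -> float:
    """P(request is cached) = H_M(alpha) / H_m(alpha)."""
    weights = [1.0 / r ** alpha for r in range(1, m + 1)]
    return sum(weights[:M]) / sum(weights)

m, M = 1000, 100
print(zipf_hit_rate(m, M, 0.0))  # alpha = 0 is uniform demand: exactly M/m = 0.1
print(zipf_hit_rate(m, M, 0.8))  # concentration lifts the hit rate well above M/m
```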
ex-cc-ch10-09
Medium. Network size crossover. For what $n$ does D2D + caching start outperforming infrastructure? Give an order-of-magnitude estimate for typical cache ratios and link rates.
Set $T_{\text{D2D}}(n) = T_{\text{cell}}(n)$.
Crossover equation
D2D: $T_{\text{D2D}} = c \cdot (M/m) \cdot R_{\text{D2D}}$ per user, independent of $n$. Infrastructure: $T_{\text{cell}} = C_{\text{cell}}/n$ per user. Assume $C_{\text{cell}} \approx 1$ Gbps and $R_{\text{D2D}} \approx 100$ Mbps. Crossover: $n^* = C_{\text{cell}} / \left(c \cdot (M/m) \cdot R_{\text{D2D}}\right)$.
Numerical
With these numbers and $c \cdot (M/m)$ in the range 0.1-0.3, $n^* \approx 30$-$100$. D2D wins for densities of ~30-100 users sharing a cell's resources.
Implication
At stadium/urban density (1000+ users/cell), D2D is the overwhelmingly preferred architecture. At small-office or rural density, infrastructure is better.
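The crossover formula is a one-liner; the cell capacity, D2D link rate, cache ratio, and reuse constant below are illustrative assumptions chosen to land in the 30-100 range discussed above.

```python
# Back-of-envelope crossover n* where the D2D per-user rate c*(M/m)*R_d2d
# equals the infrastructure rate C_cell/n. All figures are illustrative
# assumptions (1 Gbps cell, 100 Mbps D2D links), not exercise-given values.
def crossover_n(C_cell_mbps: float, R_d2d_mbps: float,
                cache_ratio: float, c: float) -> float:
    return C_cell_mbps / (c * cache_ratio * R_d2d_mbps)

print(crossover_n(1000.0, 100.0, 0.25, 0.4))  # 100.0 users
print(crossover_n(1000.0, 100.0, 0.30, 1.0))  # ~33 users
```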
ex-cc-ch10-10
Medium. Cache refresh economics. With library churn rate 10% per day, a per-user cache of $M_{\text{GB}}$ GB, and $n$ users, compute the daily cache-refresh bandwidth per user.
Refresh = 10% × cache size, daily.
Compute
Daily refresh per user: $0.1 \times M_{\text{GB}}$ GB. Total across $n$ users: $0.1 \times M_{\text{GB}} \times n$ GB.
Rate
Distributed over 8 hours (off-peak): $0.1 M_{\text{GB}}/8$ GB/hr per user, i.e. roughly $0.03 M_{\text{GB}}$ Mbps average per user (about 0.3 Mbps for a 10 GB cache): negligible compared to peak delivery rates.
Conclusion
Cache refresh is practically affordable even with high churn rates. The main constraint is energy (wakeup / radio on) for mobile devices, not bandwidth.
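The arithmetic behind the "negligible" conclusion, with the 10 GB cache size as an illustrative assumption:

```python
# Average refresh bandwidth per user: (churn * cache size) spread over an
# off-peak window. The 10 GB cache size is an illustrative assumption.
def refresh_mbps_per_user(cache_gb: float, churn: float, window_hours: float) -> float:
    bits_per_day = churn * cache_gb * 8e9                 # GB refreshed daily -> bits
    return bits_per_day / (window_hours * 3600.0) / 1e6   # average Mbps

print(round(refresh_mbps_per_user(cache_gb=10.0, churn=0.10, window_hours=8.0), 2))  # 0.28
```

At a fraction of a megabit per second, refresh traffic is dwarfed by even a single streaming session.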
ex-cc-ch10-11
Hard. Converse proof sketch. Prove an upper bound on per-user D2D throughput: $T(n) = O\!\left(M/m + 1/\sqrt{n \log n}\right)$.
Cut-set around any user: caches + neighbors provide limited information.
Without caching contribution, Gupta-Kumar applies.
Cut-set
Consider any user $u$. Its own cache serves a fraction $M/m$ of demands locally (no network transmission). The remaining $1 - M/m$ fraction must come via D2D transmissions, which are subject to Gupta-Kumar scaling.
Combining
$T(n) \le c_1 \frac{M}{m} + c_2 \frac{1}{\sqrt{n \log n}}$. Achievability (Ji-Caire-Molisch) matches up to constants.
Interpretation
The $M/m$ term dominates for fixed $M/m$ and large $n$. The $1/\sqrt{n \log n}$ term vanishes; the cache contribution dominates.
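Written out with the two contributions labeled (constants $c_1, c_2$ unspecified):

```latex
T(n) \;\le\; \underbrace{c_1\,\frac{M}{m}}_{\text{own-cache hits}}
\;+\; \underbrace{c_2\,\frac{1}{\sqrt{n\log n}}}_{\text{D2D relaying (Gupta--Kumar)}}
\;=\; O\!\left(\frac{M}{m}\right)\quad\text{for fixed } M/m \text{ as } n\to\infty.
```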
ex-cc-ch10-12
Hard. Hybrid D2D/infrastructure optimization. Under mixed demand (popularity-concentrated + long-tail), derive the optimal split of demand between D2D delivery and infrastructure backup.
Popular files served by D2D; unpopular by infrastructure.
Setup
Fraction $q$ of demand is popular (covered by D2D with hit rate $h$). Fraction $1 - q$ is unpopular (fallback to infrastructure). Aggregate demand rates: $\lambda_{\text{D2D}} = qhn\lambda$, $\lambda_{\text{infra}} = (1 - qh)n\lambda$, where $\lambda$ is the per-user request rate.
Balance
Infrastructure bandwidth is fixed; D2D capacity scales with $n$. Optimize the cache-placement policy (which files count as popular) to minimize infrastructure load subject to its capacity constraint. Typically: cache popular content, let unpopular requests go via infrastructure.
Caire-group work
See Ji-Tulino-Llorca-Caire 2017 for exact analysis under Zipf popularity. Hybrid schemes beat pure D2D or pure infrastructure.
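One concrete form of this optimization is a threshold rule: find the smallest popular prefix that D2D must cover so the residual long-tail load fits the fixed infrastructure capacity. The Zipf exponent, user count, per-user demand, and capacity below are all illustrative assumptions, not figures from the exercise.

```python
# Sketch of a hybrid split under Zipf demand: smallest popular-prefix size
# served by D2D so the long-tail load fits a fixed infrastructure capacity.
# alpha, n_users, per_user_mbps, infra_capacity_mbps are illustrative.
def min_d2d_prefix(m: int, alpha: float, n_users: int,
                   per_user_mbps: float, infra_capacity_mbps: float) -> int:
    weights = [1.0 / r ** alpha for r in range(1, m + 1)]
    total = sum(weights)
    tail = total  # demand mass left to the infrastructure
    for cutoff in range(m + 1):
        if tail / total * n_users * per_user_mbps <= infra_capacity_mbps:
            return cutoff  # top-`cutoff` files go to D2D caches
        tail -= weights[cutoff]
    return m

print(min_d2d_prefix(m=1000, alpha=0.8, n_users=1000,
                     per_user_mbps=1.0, infra_capacity_mbps=100.0))
```

Because Zipf mass concentrates at the head, a modest popular prefix absorbs most of the load; the threshold grows as capacity shrinks.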
ex-cc-ch10-13
Challenge. Physical interference model. Re-derive the D2D scaling under an SINR-based physical interference model (instead of the protocol model). Does the $\Theta(M/m)$ scaling survive?
SINR gives a continuous rather than binary interference cap.
Conditions under which the scaling survives: density, path-loss exponent.
SINR model
Under SINR with path-loss exponent $\alpha > 2$, the equivalent interference radius scales the same way as in the protocol model. The analysis generalizes: $\Theta(M/m)$ holds.
Edge cases
Dense scatterer environments (small $\alpha$, near 2) can change the scaling; typically $\alpha \in [3, 4]$ (urban) gives results similar to the protocol model.
Research
Physical-model analysis is technically involved; Ji-Caire-Molisch's Theorem 1 is stated under the protocol model, with extensions to SINR in follow-up CommIT papers.
ex-cc-ch10-14
Challenge. User selfishness. In a real D2D deployment, users may refuse to serve others (selfish behavior). How does this affect the scaling-law result?
If a fraction $s$ of users is selfish, the effective network size shrinks.
Setup
Let $s$ = fraction of selfish users (those who refuse to serve). Effective serving population: $(1 - s)n$. Cache coverage is reduced by the same factor.
Scaling
Per-user throughput: $\Theta((1 - s) \cdot M/m)$. Still $\Theta(M/m)$ in order; the constant is multiplied by $(1 - s)$.
Implications
Up to ~50% selfishness, D2D caching still delivers substantial throughput. Beyond that, infrastructure takes over. Incentive design (token schemes, reduced billing for servers) aims to keep selfishness low.
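The graceful degradation can be seen by modeling selfishness as shrinking the serving neighbor count from $k$ to $(1-s)k$ in the hit-probability formula; $k = 8$ and $M/m = 0.25$ below are illustrative assumptions.

```python
# Effect of a selfish fraction s, modeled as shrinking the serving neighbor
# count from k to (1-s)*k. k = 8 and M/m = 0.25 are illustrative values.
def hit_with_selfishness(k: float, cache_ratio: float, s: float) -> float:
    return 1.0 - (1.0 - cache_ratio) ** ((1.0 - s) * k)

for s in (0.0, 0.25, 0.5, 0.75):
    print(s, round(hit_with_selfishness(8, 0.25, s), 3))
```

Even at $s = 0.5$ the hit probability remains well above one half; it collapses only as the serving population becomes scarce.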
Open
Game-theoretic analysis of D2D with strategic users is active research. The coded-caching community has touched on this; full treatment beyond scope.
ex-cc-ch10-15
Challenge. Mobility effects. When users are mobile (neighbors change over time), does the scaling law still hold? How does mobility change the constants?
Mobility can be an advantage (content spreads) or disadvantage (neighbors transient).
Without cache refresh
If user mobility is faster than cache refresh, new arrivals don't have relevant cached content. Hit rate degrades. Scaling can even fall to Gupta-Kumar if mobility is extreme.
With adaptive caching
Mobile users can adapt caches based on location-dependent demand. Mobility-induced mixing spreads popular content network-wide. The $\Theta(M/m)$ scaling holds with similar constants.
Regime analysis
- Slow mobility (humans walking): negligible effect.
- Moderate (cars): scaling holds, with constant-factor degradation.
- Fast (high-speed rail): may fall out of scaling regime.
Research direction
Mobility-aware D2D caching is an active 6G research area. Digital twins and location-prediction may enable proactive caching that survives mobility.