Predictive Caching via Sensing

Closing the Sensing → Caching Loop

Section 19.2 showed how caching frees resources for sensing. The reverse direction is equally powerful: sensing outputs can drive caching decisions.

Sensing produces vehicle locations, mobility patterns, and target identity information. These are correlated with content demand: a vehicle approaching an intersection will likely request intersection-specific map tiles; a user in a stadium will likely request replay clips of the ongoing game; a surveillance feed identifying a truck will trigger routing requests. In each case, sensing provides demand predictors with accuracy $\alpha > 0$.

Predictive caching uses $\alpha$ to skew the cache placement toward predicted-likely content. The resulting hit rate and rate reduction scale with $\alpha$: perfect prediction ($\alpha = 1$) yields zero miss rate; no prediction ($\alpha = 0$) recovers the MAN baseline.

Definition: Predictive Cache Placement

A predictive cache placement policy takes as input:

  • Sensing outputs $\mathbf{s} \in \mathcal{S}$ (e.g., user positions, velocities, target IDs).
  • A prediction function $\phi: \mathcal{S} \to \Delta([N])$ mapping sensing observations to a probability distribution over files.

and outputs a cache content $\mathcal{Z}_k(\mathbf{s})$ that depends on the sensing observation.

Prediction accuracy is $\alpha = \Pr[\phi(\mathbf{s}) \text{ ranks } d_k \text{ among its top-}M \text{ files}]$.

The policy is dynamic: caches are updated as sensing information arrives. This differs from the static MAN placement (which assumes no prior knowledge of demands).
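As a concreteness check, the definition can be sketched in a few lines of Python. The helper names (`predictive_placement`, `estimate_alpha`) and the toy predictors are illustrative assumptions, not part of the formal definition:

```python
import numpy as np

def predictive_placement(phi_s: np.ndarray, M: int) -> np.ndarray:
    """Cache content Z_k(s): the M files the predictor phi(s) ranks most likely."""
    return np.argsort(phi_s)[::-1][:M]

def estimate_alpha(phi, observations, demands, M: int) -> float:
    """Empirical prediction accuracy: the fraction of demands d_k that fall
    within the predictor's top-M ranking, as in the definition above."""
    hits = [d in predictive_placement(phi(s), M)
            for s, d in zip(observations, demands)]
    return float(np.mean(hits))
```

A predictor that puts all probability mass on the true demand achieves $\alpha = 1$; a flat predictor achieves only $\alpha \approx M/N$ for uniformly random demands.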

Theorem: Predictive Caching Rate Improvement

For an ISAC-cached system with prediction accuracy $\alpha \in [0, 1]$, a predictive cache placement achieves expected delivery rate
$$\mathbb{E}[R_\text{pred}] \;=\; (1 - \alpha) \cdot R_\text{MAN} + \alpha \cdot 0 \;=\; (1 - \alpha) \cdot \frac{K(1-\mu)}{1 + K\mu}.$$
This is a linear interpolation between the MAN rate (no prediction) and zero rate (perfect prediction).

With accuracy $\alpha$, the cache holds the right content for a fraction $\alpha$ of users, so no delivery is needed for them. The remaining $(1 - \alpha)$ fraction uses standard MAN delivery. Hence the linear reduction.
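A minimal numeric sketch of the theorem, using the document's $K$, $\mu$, $\alpha$ (function names are mine):

```python
def man_rate(K: int, mu: float) -> float:
    """MAN delivery rate K(1 - mu) / (1 + K*mu), with mu = M/N the normalized cache size."""
    return K * (1 - mu) / (1 + K * mu)

def predictive_rate(K: int, mu: float, alpha: float) -> float:
    """Expected predictive-caching rate: (1 - alpha) * R_MAN, per the theorem above."""
    return (1 - alpha) * man_rate(K, mu)
```

For $K = 20$ users and $\mu = 0.25$, the MAN rate is $2.5$; accuracy $\alpha = 0.6$ cuts it to $1.0$.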

[Interactive figure: Predictive Caching Hit Rate vs Sensing Accuracy. Cache hit rate as a function of sensing-driven prediction accuracy $\alpha$. The base hit rate (no prediction, $\alpha = 0$) equals the MAN baseline; perfect prediction ($\alpha = 1$) gives a 100% hit rate.]

[Interactive figure: Sensing CRB vs Cache Size. Sensing CRB as a function of cache size $M$: coded caching reduces communication overhead, freeing resources for sensing and lowering the CRB, compared to an uncached baseline.]

Example: V2X Map-Tile Predictive Caching

An autonomous vehicle approaches a merge junction. Sensing reports its position, velocity, and heading. Map-tile demand is highly predictable from these: upcoming road segments have probability 0.9 of being requested within 2 seconds. Design a predictive caching strategy.
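One way to instantiate such a strategy, sketched under assumed modeling choices: dead-reckon the vehicle over the 2-second horizon, score tiles by distance to the predicted position, and cache the top-$M$. The function names, the exponential decay, and the `scale` parameter are all hypothetical:

```python
import numpy as np

def tile_demand_probs(position, velocity, tile_centers, horizon=2.0, scale=50.0):
    """Toy phi(s) for map tiles: dead-reckon the vehicle over `horizon` seconds,
    then turn tile distances into probabilities via exponential decay."""
    predicted = np.asarray(position, dtype=float) + horizon * np.asarray(velocity, dtype=float)
    dists = np.linalg.norm(np.asarray(tile_centers, dtype=float) - predicted, axis=1)
    scores = np.exp(-dists / scale)
    return scores / scores.sum()

def cache_tiles(position, velocity, tile_centers, M):
    """Predictive placement: cache the M tiles ranked most likely."""
    probs = tile_demand_probs(position, velocity, tile_centers)
    return np.argsort(probs)[::-1][:M]
```

A vehicle at the origin moving at 20 m/s along $+x$ is predicted at $(40, 0)$ after 2 s, so the tile centered there is cached first.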

🎓 CommIT Contribution (2023)

Predictive Coded Caching with ISAC

Y. Zhou, G. Caire, IEEE Transactions on Wireless Communications

The CommIT group introduced predictive coded caching driven by ISAC outputs. Key contributions:

  1. Prediction framework. Formalized the mapping from sensing observations (position, velocity, trajectory) to per-file demand probabilities $\phi(\mathbf{s})$.
  2. Rate characterization. Showed that the expected delivery rate decreases linearly in prediction accuracy: $\mathbb{E}[R] = (1-\alpha) R_\text{MAN}$.
  3. Adaptive placement algorithm. An online algorithm that updates cache content as sensing arrives, approaching the offline optimum within a constant factor.
  4. V2X and smart-city case studies. Demonstrated 3-10× bandwidth reduction in realistic traffic scenarios.

This work is among the first to rigorously couple ISAC outputs with coded caching, enabling the sensing → caching feedback loop central to 6G.


Common Mistake: Prediction Errors Can Degrade Below MAN

Mistake:

Assuming predictive caching always outperforms static MAN.

Correction:

If the sensing predictor is miscalibrated (systematically wrong, so that the files it ranks as likely are in fact rare), the predictive cache may perform worse than the static MAN baseline: the cache holds rare content while common demands miss.

Robust predictive schemes mix static MAN and predictive placement: reserve a fraction $\beta$ of the cache for MAN's combinatorial pattern (preserving worst-case guarantees with effective MAN memory $\beta M$) and the remaining $(1-\beta)$ for predictive content. The combined scheme never falls below the MAN rate at cache size $\beta M$, and approaches the predictive rate as $\alpha \to 1$.
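The miscalibration failure mode can be demonstrated with a toy whole-file simulation. Everything here is an illustrative assumption: the linear popularity profile, and a "cache the most popular files" static baseline standing in for MAN (real MAN placement caches coded fractions of every file):

```python
import numpy as np

def simulate_miscalibrated(N=100, M=10, T=5000, seed=0):
    """Hit rates of a popularity-aware static cache vs. a systematically
    wrong predictor that caches the M *least* popular files."""
    rng = np.random.default_rng(seed)
    pop = np.arange(N, 0, -1, dtype=float)
    pop /= pop.sum()                          # file 0 is the most popular
    demands = rng.choice(N, size=T, p=pop)
    static_cache = set(range(M))              # the M most popular files
    bad_cache = set(range(N - M, N))          # miscalibrated: the M rarest files
    hit = lambda cache: float(np.mean([d in cache for d in demands]))
    return hit(static_cache), hit(bad_cache)
```

With these defaults the wrong predictor's hit rate collapses well below the static baseline, which is exactly why reserving a $\beta$ fraction for the static pattern is worthwhile.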

Key Takeaway

Sensing-driven predictive caching reduces the delivery rate linearly in prediction accuracy $\alpha$: at $\alpha = 1$, zero delivery; at $\alpha = 0$, the MAN baseline. The CommIT Zhou-Caire 2023 framework formalizes the sensing → caching direction, closing the ISAC-caching loop. Robust deployments mix static MAN and predictive placement for worst-case safety.