NDT Upper and Lower Bounds
Achievability and Converse
Having defined NDT, we now analyze the achievable bounds. The Sengupta-Tandon-Simeone 2017 paper established the first tight characterization for several regimes; subsequent CommIT work has refined the bounds for coded placement and mixed traffic.
This section presents the achievable upper bound (a Lampiris-Caire-style scheme adapted to C-RAN) and the information-theoretic converse. For many regimes, the bounds coincide; for others, the gap is a factor of 2 or less.
Theorem: NDT Achievability (Upper Bound)
For the cache-aided cloud-RAN with an integer caching parameter, an achievable NDT is obtained by combining MAN-style placement at the ENs with cooperative Lampiris-Caire delivery on the downlink; the resulting bound is the sum of a fronthaul term and a downlink term.
Extend the Lampiris-Caire scheme from Chapter 5: the cooperating ENs act as a single transmitter with $N_{\mathrm{EN}}$ antennas. The aggregate caching gain provides an additional coded multicast boost. The downlink term captures the Lampiris-Caire DoF; the fronthaul term captures the cloud-to-EN transfer.
Placement
Each EN caches a subset of subfiles so that, in aggregate, the cluster stores multiple copies of each cached piece of the library. Specifically, run MAN with the ENs playing the role of the "users", at caching level $\mu$, but treat each MAN subfile as replicated across ENs in a suitable pattern.
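As a concrete sketch of this placement, the snippet below enumerates an MAN-style subfile assignment across ENs. The parameter `t` (the number of ENs caching each subfile) and all names are illustrative, not the chapter's notation.

```python
from itertools import combinations

def man_placement(n_en: int, t: int, files: list[str]) -> dict[int, list[tuple]]:
    """MAN-style placement sketch: each file is split into C(n_en, t)
    subfiles, one per t-subset S of ENs; EN n caches subfile (f, S) iff
    n is in S. Every subfile is thus replicated at exactly t ENs, and
    each EN stores a fraction t/n_en of every file."""
    cache = {n: [] for n in range(n_en)}
    for f in files:
        for subset in combinations(range(n_en), t):
            for n in subset:
                cache[n].append((f, subset))
    return cache

caches = man_placement(n_en=4, t=2, files=["W1"])
# Each EN holds 3 of the 6 subfiles of W1, i.e. a fraction t/n_en = 1/2.
assert all(len(subfiles) == 3 for subfiles in caches.values())
```

The replication pattern is what later enables both the coded multicast opportunities and the EN cooperation in the delivery phase.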
Fronthaul phase
For non-cached content, the cloud sends coded (XOR) messages to the ENs via the fronthaul. The required fronthaul rate, in files per channel use per EN, scales with the non-cached fraction of the requested content.
Downlink phase
The ENs act as cooperating antennas of a single transmitter. Apply Lampiris-Caire: the DoF is the number of cooperating antennas plus the aggregate coded caching gain. The delivery time is the requested load divided by this DoF.
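Numerically, the DoF and delivery-time bookkeeping look like this. This is a sketch: the symbols `k_users` (number of users) and `mu` (cache fraction) are our assumptions for illustration, not the chapter's exact expression.

```python
def lc_dof(n_en: int, k_users: int, mu: float) -> float:
    """Illustrative sum-DoF of the cooperative downlink: the spatial gain
    of the n_en cooperating ENs plus an aggregate coded-caching gain
    k_users * mu, capped at the number of users."""
    return min(k_users, n_en + k_users * mu)

def downlink_time(load_files: float, dof: float) -> float:
    """Delivery time = load (in files) divided by the achieved DoF."""
    return load_files / dof
```

For example, `lc_dof(4, 20, 0.1)` gives a DoF of 6: four cooperating antennas plus an aggregate caching gain of 2.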
Total NDT
Sum the two phases: total NDT = fronthaul NDT + downlink NDT, since in the simplest protocol the fronthaul and downlink phases run in sequence.
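The two-phase total can be sketched numerically. The closed-form terms below are illustrative assumptions chosen to match the limiting behaviors discussed in this section ($\mu = 1$ gives NDT $= 1$; $C \to \infty$ leaves only the downlink term); `k_users`, `c`, and the function name are ours, not the chapter's exact expression.

```python
def ndt_upper_bound(k_users: int, n_en: int, mu: float, c: float) -> float:
    """Achievable-NDT sketch: sequential fronthaul + downlink phases.

    Assumed illustrative terms:
      fronthaul: non-cached load k*(1-mu) over n_en parallel links of rate c
      downlink:  all k requests over a DoF of min(k, n_en + k*mu)
    """
    fronthaul = k_users * (1 - mu) / (n_en * c)
    downlink = k_users / min(k_users, n_en + k_users * mu)
    return fronthaul + downlink

# Full cache: the fronthaul term vanishes and NDT hits the baseline of 1.
assert ndt_upper_bound(10, 4, 1.0, 0.5) == 1.0
```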
Theorem: NDT Converse (Lower Bound)
For the cache-aided cloud-RAN, any achievable NDT satisfies the cut-set lower bound derived below. The bound matches the upper bound when the aggregate coded caching gain is zero. For positive caching gain, the bounds separate: the coded caching gain appears only on the achievable side.
Cut-set argument on the cloud-EN and EN-user boundaries. Each cut has a capacity; the total delivery time is at least the sum of the per-cut times. The converse does not fully exploit the coded caching structure; the gap to achievability is the caching gain.
Cut-set argument
Two cuts: (i) cloud → all ENs (bottleneck: fronthaul); (ii) all ENs → all users (bottleneck: downlink). Both cuts must carry the non-cached content, whose size is the non-cached fraction of the requested files.
Per-cut time
Cut (i) time: the non-cached load divided by the aggregate fronthaul capacity. Cut (ii) time: the non-cached load divided by the downlink DoF.
Sum
The delivery time is at least the sum of the per-cut times (they run in sequence in the simplest protocol). Hence the NDT is lower-bounded by the sum of the two cut times.
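The cut accounting can be sketched as follows. The expressions and names are ours, for illustration only; the sketch deliberately omits the coded-caching gain on the downlink cut, exactly as the converse does.

```python
def ndt_lower_bound(k_users: int, n_en: int, mu: float, c: float) -> float:
    """Cut-set converse sketch: both cuts carry the non-cached load, and
    the phases run in sequence, so the NDT is at least the sum of the
    per-cut times. The downlink cut credits only the n_en-antenna DoF,
    not the coded-caching gain; this is the source of the gap to
    achievability noted under Tightness."""
    load = k_users * (1 - mu)            # non-cached content, in files
    cut_i = load / (n_en * c)            # cut (i): cloud -> ENs over fronthaul
    cut_ii = load / min(k_users, n_en)   # cut (ii): ENs -> users over downlink
    return cut_i + cut_ii
```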
Tightness
Tight when the coded caching gain is zero. For positive gain, the converse does not account for the Lampiris-Caire DoF gain; tighter converses (not in the 2017 paper) close the gap partially.
Cloud-Edge Delivery Split
Fraction of delivery served from the cloud (via fronthaul) vs. from the edge (via cache) as a function of fronthaul capacity $C$. At low $C$, the edge dominates; at high $C$, the cloud takes over. The transition reflects the cache-fronthaul tradeoff.
Example: High-Cache Regime
As the cache grows to $\mu = 1$ (full library at each EN), show that NDT $\to 1$ regardless of the fronthaul capacity $C$.
Substitute $\mu = 1$
Upper bound: with $\mu = 1$, nothing remains to fetch from the cloud, so the fronthaul term vanishes; the downlink term reduces to the cooperative MU-MIMO baseline, giving NDT $= 1$.
Interpretation
With full cache, no fronthaul is needed, and the downlink serves users via cooperative MU-MIMO. NDT = 1 is the baseline, achieved by pure MU-MIMO when fronthaul is unnecessary.
Diminishing returns
Above some cache threshold, further cache increases do not improve NDT. The architectural insight: oversized cache is wasted once NDT saturates at 1. The optimal cache size depends on the fronthaul capacity $C$.
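The saturation threshold can be located with a small sweep, using an assumed illustrative NDT expression (all names and the formula are ours, for illustration only):

```python
def ndt(k: int, n_en: int, mu: float, c: float) -> float:
    """Illustrative NDT: a fronthaul term plus a downlink term (assumed form)."""
    fronthaul = k * (1 - mu) / (n_en * c)
    downlink = k / min(k, n_en + k * mu)
    return fronthaul + downlink

# Smallest cache fraction at which NDT is within 5% of its saturation value 1.
thresholds = {
    c: next(m / 100 for m in range(101) if ndt(10, 4, m / 100, c) <= 1.05)
    for c in (0.5, 2.0, 8.0)
}
# Richer fronthaul lowers the threshold: less cache is needed before
# additional cache stops paying off.
assert thresholds[8.0] <= thresholds[2.0] <= thresholds[0.5]
```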
Example: High-Fronthaul Regime
As $C \to \infty$ (abundant fronthaul), show that NDT approaches the Lampiris-Caire DoF limit.
Substitute $C \to \infty$
Fronthaul term: vanishes as $C \to \infty$. The downlink term dominates.
Result
NDT reduces to the downlink term alone: the number of users divided by the Lampiris-Caire DoF. For small $\mu$ and a moderate number of users, this is $> 1$; the downlink is the bottleneck.
Interpretation
At infinite fronthaul, the architecture reduces to Chapter 5's single-transmitter MIMO BC with effective antenna count $N_{\mathrm{EN}}$. Cache still matters, but the cloud-EN handoff is free.
Key Takeaway
NDT bounds guide architectural design. Upper bound (achievable): a Lampiris-Caire-style scheme adapted to C-RAN. Lower bound (converse): a cut-set argument. The two are tight at several operating points; in general the gap is at most a factor of 2. For deployment planning, the achievable bound is what matters: it tells you what latency your design can hit.
NDT in 5G NR Deployments
Typical 5G NR parameters and their NDT implications:
- Small-cell C-RAN: a few ENs per cluster with abundant fronthaul at small scale; the high-fronthaul regime applies, and NDT approaches the Lampiris-Caire DoF limit.
- Macro-cell C-RAN: a single EN per site, hence no aggregate caching gain; NDT reduces to conventional MU-MIMO. More ENs are needed for an NDT benefit.
- Fog / Open RAN: many ENs with distributed caches; the aggregate caching gain can reach 10+. NDT approaches 1 for moderate fronthaul.
The NDT framework provides a principled way to size the cache and fronthaul at deployment time. Production 5G operators increasingly use it in their design tooling.
- 5G NR small-cell C-RAN: N_EN = 2-8 per cluster
- Fronthaul: 25-100 Gbps per EN; translates to C = 10-50 files/use at typical SNR
- RU cache: 10-100 GB; translates to μ = 0.01-0.1 for a 1-10 TB library
- Target latency: <10 ms downlink for typical 5G eMBB (NDT < 5 at the baseline)
Common Mistake: Fronthaul Units (Files per Use vs. Absolute Bandwidth)
Mistake:
Confusing the symbolic fronthaul capacity $C$ (files per channel use) with absolute fronthaul bandwidth (Gbps).
Correction:
In the NDT framework, $C$ is measured in files per channel use in the high-SNR limit. This normalizes the fronthaul rate to the downlink rate. Conversion: writing $C_F$ for the fronthaul rate in bits/s, $F$ for the file size in bits, and $R_s$ for the downlink symbol rate in channel uses per second, $C = C_F / (F \cdot R_s)$ files/use.
A 10 Gbps fronthaul delivering 1 GB ($8 \times 10^9$-bit) files over a 100 MBaud downlink gives $C = 10^{10} / (8 \times 10^9 \cdot 10^8) \approx 1.25 \times 10^{-8}$ files/use. But at a 100 kBaud downlink, $C \approx 1.25 \times 10^{-5}$. These are wildly different regimes; read the assumptions carefully.
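The conversion is easy to get wrong by hand, so here is a short checked version (function and variable names ours):

```python
def fronthaul_files_per_use(rate_bps: float, file_bits: float, baud: float) -> float:
    """Symbolic fronthaul capacity C: files per second delivered by the
    fronthaul, normalized by downlink channel uses per second."""
    return (rate_bps / file_bits) / baud

# 10 Gbps fronthaul, 1 GB (8e9-bit) files:
c_fast = fronthaul_files_per_use(10e9, 8e9, 100e6)  # 100 MBaud downlink
c_slow = fronthaul_files_per_use(10e9, 8e9, 100e3)  # 100 kBaud downlink
# The same absolute fronthaul differs by a factor of 1000 in symbolic units.
```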