Exercises
ex-ch21-e01
Easy: Define successive refinement. Why is it useful for caching?
Base + enhancement layers.
Cache can hold base; refine on demand.
Definition
A source is successively refinable (SR) if, for distortions $D_1 > D_2$, a layered code achieves $R_1 = R(D_1)$ and $R_1 + R_2 = R(D_2)$. Rate adds across layers without redundancy.
Caching use
Cache base layer (widely useful); enhance on demand from server. Separates storage concerns: base in cache, enhancements in delivery.
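As a concrete check, the Gaussian source (which is known to be successively refinable) shows the rates adding exactly across layers; the variance and distortion targets below are illustrative:

```python
import math

def rate(sigma2, D):
    """Rate-distortion function of a Gaussian source: R(D) = 0.5*log2(sigma^2/D) bits."""
    return 0.5 * math.log2(sigma2 / D)

sigma2 = 1.0           # source variance (illustrative)
D1, D2 = 0.25, 0.0625  # coarse and fine distortion targets, D2 < D1

base = rate(sigma2, D1)                # base-layer rate: R(D1)
enhancement = rate(sigma2, D2) - base  # refinement-layer rate

# Successive refinability: base + enhancement meets R(D2) with zero redundancy.
assert abs(base + enhancement - rate(sigma2, D2)) < 1e-12
print(base, enhancement)  # 1.0 1.0 (bits per symbol)
```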
ex-ch21-e02
Easy: For $K$ users with cache fraction $M/N$ and bitrates SD = 1, HD = 3, 4K = 10 Mbps, compute the multi-rate coded rate if all users request 4K.
$b_{\max}$
All at 4K: every user demands $b_{\max} = 10$ Mbps, so the uncoded load is $10K$ Mbps.
MAN factor
Multicast gain $g = 1 + KM/N$.
Rate
$R = 10K/(1 + KM/N)$ Mbps. Uncoded would be $10K$ Mbps: a factor-$(1 + KM/N)$ savings.
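A minimal numeric sketch; $K = 1000$ and $M/N = 0.3$ are illustrative assumptions, while $b_{\max} = 10$ Mbps is from the exercise:

```python
K = 1000          # number of users (assumed)
cache_frac = 0.3  # cache fraction M/N (assumed)
b_max = 10.0      # 4K bitrate in Mbps (from the exercise)

gain = 1 + K * cache_frac  # MAN multicast gain 1 + KM/N
uncoded = K * b_max        # every user served by unicast at b_max
coded = uncoded / gain     # one coded stream serves all caches

print(uncoded, gain, round(coded, 2))  # 10000.0 301.0 33.22
```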
ex-ch21-e03
Medium: Derive the optimal cache split for $L$ layers (base + enhancement) with popularities $p_\ell$.
Formula
$m_\ell = M \cdot p_\ell b_\ell / \sum_j p_j b_j$: allocate cache in proportion to popularity times bitrate.
Two layers
$m_1 = M p_1 b_1 / (p_1 b_1 + p_2 b_2)$. $m_2 = M p_2 b_2 / (p_1 b_1 + p_2 b_2)$.
Example
$p_1 = p_2 = 0.5$, $b_1 = 1$ Mbps (base), $b_2 = 3$ Mbps (HD). Weights: $0.5$ vs $1.5$. Split: $25\%$ for base, $75\%$ for HD.
Interpretation
HD dominates because it is both popular and high-bitrate: the weight $p_\ell b_\ell$ rewards layers on both axes.
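The proportional split can be sketched as below; the two-layer popularities and bitrates are illustrative assumptions:

```python
def cache_split(pops, bitrates, M):
    """Allocate cache budget M across layers in proportion to p_l * b_l."""
    weights = [p * b for p, b in zip(pops, bitrates)]
    total = sum(weights)
    return [M * w / total for w in weights]

# Base layer: p = 0.5 at 1 Mbps; HD enhancement: p = 0.5 at 3 Mbps (illustrative).
split = cache_split([0.5, 0.5], [1.0, 3.0], M=1.0)
print(split)  # [0.25, 0.75]
```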
ex-ch21-e04
Medium: Explain why the QoE function is concave in bitrate.
Human perception: diminishing returns at higher quality.
Psychophysical
Human vision: SD → HD is a huge perceptual jump. HD → 4K less so. 4K → 8K barely visible on typical screens.
Mathematical
QoE $= \log(1 + b)$ or a similar concave form. Marginal QoE per Mbps decreases as $b$ grows.
Caching implication
Optimal cache prioritizes getting many users to HD rather than few users to 4K → matches concavity.
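A quick numeric check of the concavity argument, using an illustrative log-QoE and the same 30 Mbps of capacity spent two ways:

```python
import math

def qoe(b):
    """A common concave QoE proxy: logarithmic in bitrate (diminishing returns)."""
    return math.log(1 + b)

# The same 30 Mbps of delivery capacity, spent two ways (illustrative numbers):
many_hd = 10 * qoe(3.0)  # 10 users at HD (3 Mbps each)
few_4k = 3 * qoe(10.0)   # 3 users at 4K (10 Mbps each)

assert many_hd > few_4k  # concavity favors many-at-HD
print(round(many_hd, 2), round(few_4k, 2))
```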
ex-ch21-e05
Medium: Chunk-level coded caching: why does chunk-level coding work better than file-level in practice?
Subpacketization $= \binom{K}{KM/N}$ at both chunk level and file level; what differs is the size being split.
Subpacketization
File-level: subfile size $=$ file size $/ \binom{K}{KM/N}$. For large $K$: tiny subfiles, high header overhead.
Chunk-level
Chunk size fixed ($\approx 5$ MB). Subpacketization applies per chunk: each chunk splits into $\binom{K}{KM/N}$ subchunks.
Crossover
Chunk-level becomes impractical when $\binom{K}{KM/N}$ exceeds chunk size in bits divided by packet size. This cap on $K$ typically binds before chunks become too small.
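The subfile-size collapse can be tabulated; the movie size, chunk size, and cache fraction below are illustrative:

```python
from math import comb

file_size_mb = 13_500  # a ~13.5 GB movie (illustrative)
chunk_size_mb = 5      # fixed DASH chunk size (illustrative)

for K, t in [(10, 3), (20, 6), (30, 9)]:   # t = KM/N at cache fraction 0.3
    subfiles = comb(K, t)                   # subpacketization level
    file_level = file_size_mb / subfiles    # file-level subfile size, MB
    chunk_level = chunk_size_mb / subfiles  # chunk-level subchunk size, MB
    print(K, subfiles, file_level, chunk_level)
```

At $K = 30$ the chunk-level subchunks are already below one byte, illustrating the crossover.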
ex-ch21-e06
Hard: Derive the QoE gain from coded caching vs uncoded, for a user-level QoE model.
Composite: bitrate + rebuffering.
Uncoded baseline
Bandwidth $B$ serves $K$ users at $B/K$ each. QoE per user: $\log(B/K)$ minus a rebuffering penalty.
Coded
Effective bandwidth $B(1 + KM/N)$. Per-user bitrate $B(1 + KM/N)/K$. Massive boost.
Rebuffering
Better-provisioned bandwidth → less congestion → fewer stalls. Non-linear gain in rebuffering.
Total QoE uplift
Approximately $+\log(1 + KM/N)$ on the bitrate component; a further gain from the reduced rebuffering penalty. Composite: roughly $\log(1 + KM/N)$ higher QoE per user.
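A sketch of the uplift under the log-QoE model; $K$, $B$, and $M/N$ are assumed values:

```python
import math

K = 100           # users (assumed)
B = 300.0         # server bandwidth in Mbps (assumed)
cache_frac = 0.2  # M/N (assumed)

gain = 1 + K * cache_frac     # MAN multicast gain, here 21
uncoded_bitrate = B / K       # 3 Mbps per user
coded_bitrate = gain * B / K  # effective bandwidth B*(1 + KM/N) shared by K users

# Under log-QoE, the per-user uplift is exactly log(gain), independent of B and K.
uplift = math.log(coded_bitrate) - math.log(uncoded_bitrate)
print(gain, round(uplift, 3))
```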
ex-ch21-e07
Hard: Rebuffering under burstiness: suppose server delivery is bursty with variance $\sigma^2$. How does coding help?
Coded multicast spreads load; fewer bursty events.
Uncoded bursty
All users see same channel variance. Rebuffer on congestion.
Coded
Multicast delivers to many users in one message. Fewer individual transmissions; less per-request variance.
Analytical
Central-limit argument: effective variance scales as $\sigma^2/(1 + KM/N)$ for coded delivery. Rebuffering probability $\approx Q(\Delta/\sigma)$ for buffer margin $\Delta$: it drops by far more than the variance factor.
QoE benefit
Approximately: variance reduction → rebuffering reduction → QoE boost. Compounds with the bandwidth gain.
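A hedged numeric illustration using a Gaussian-tail model for the rebuffering probability; all parameters below are assumed:

```python
import math

def rebuffer_prob(margin, sigma):
    """Gaussian-tail model: P(delivery shortfall > margin) = Q(margin / sigma)."""
    return 0.5 * math.erfc(margin / (sigma * math.sqrt(2)))

sigma2 = 4.0  # per-delivery variance (assumed)
gain = 21     # multicast gain 1 + KM/N (assumed)
margin = 3.0  # buffer headroom, same units (assumed)

p_uncoded = rebuffer_prob(margin, math.sqrt(sigma2))
p_coded = rebuffer_prob(margin, math.sqrt(sigma2 / gain))  # variance / gain
print(p_uncoded, p_coded)  # the drop is far larger than the variance factor
```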
ex-ch21-e08
Medium: Why is DASH's "pull" protocol in tension with coded "push" multicast? How can they be reconciled?
DASH pull
The client initiates an HTTP request for each chunk; the server responds per request.
MAN push
Server initiates coded broadcast; all users decode their needed content.
Tension
Different philosophies. DASH infrastructure (CDN, HTTP/2) is unicast pull.
Reconciliation
Options: (1) a parallel MBMS layer for multicast pushes; (2) HTTP/2 or HTTP/3 server push; (3) a client-side cache with server-side coded injection. Each has tradeoffs in protocol complexity.
ex-ch21-e09
Hard: Design a coded DASH extension. What changes to the MPD and protocol are needed?
MPD extension
Add "coded-chunk" descriptor: tells client this chunk can be received via coded multicast. List of coded-delivery opportunities.
Client behavior
On coded-chunk: tune to multicast channel + use cached XOR components to decode. Fallback to unicast if multicast fails.
Protocol
Side channel (e.g., WebSocket) for multicast metadata. Primary data via MBMS or HTTP/3 push.
Backward compat
Non-aware clients fall back to pure DASH. No disruption.
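The "use cached XOR components to decode" step can be sketched with a toy two-user round; the payloads and pairing are hypothetical:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical two-user round: the server multicasts one XOR of both chunks.
chunk_for_me = b"wanted-chunk-data"
chunk_cached = b"other-user-chunk!"  # subchunk this client already holds

coded_payload = xor_bytes(chunk_for_me, chunk_cached)  # single multicast message
decoded = xor_bytes(coded_payload, chunk_cached)       # cache cancels the other chunk
assert decoded == chunk_for_me
```

One transmission serves both users; each strips out the part it already caches.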
ex-ch21-e10
Medium: For a 2-hour 4K movie, compute the total bandwidth to deliver it to $K = 1000$ users under (i) uncoded unicast, (ii) a MAN-coded cache with $M/N = 0.3$.
Per-user bitrate
4K $\approx 15$ Mbps. 2-hour movie: $15 \text{ Mbps} \times 7200 \text{ s} = 108$ Gb $= 13.5$ GB.
Uncoded total
$1000 \times 13.5$ GB $= 13.5$ TB.
MAN coded
Multicast gain $1 + KM/N = 1 + 1000 \times 0.3 = 301$. Effective size $13.5 \text{ TB}/301 \approx 0.045$ TB $= 45$ GB.
Savings
$\approx 300\times$ bandwidth reduction. Enormous at scale.
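The arithmetic as a sketch, assuming $K = 1000$ users, a 15 Mbps 4K bitrate, and $M/N = 0.3$, values consistent with the 13.5 GB and 45 GB figures:

```python
K = 1000             # users (assumed)
bitrate_mbps = 15.0  # 4K bitrate (assumed)
seconds = 2 * 3600   # 2-hour movie

file_gb = bitrate_mbps * seconds / 8 / 1000  # 13.5 GB per copy
uncoded_tb = K * file_gb / 1000              # unicast to every user

gain = 1 + K * 0.3                           # MAN gain with M/N = 0.3
coded_gb = K * file_gb / gain                # one coded delivery round

print(file_gb, uncoded_tb, round(coded_gb, 1))  # 13.5 13.5 44.9
```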
ex-ch21-e11
Hard: Live streaming (5-second latency) vs on-demand: which benefits more from coded caching?
On-demand
Cache hit: immediate. MAN-coded delivery on miss. Large library; gain depends on popularity.
Live
Sudden concurrent demand; cache can't be pre-populated. MAN-coded delivery via multicast massively beneficial (single broadcast to all).
Verdict
Live benefits more from coded: concurrent demand is ideal for multicast. MBMS already deployed for live; coded MBMS is the natural evolution.
ex-ch21-e12
Medium: Discuss the tradeoff between chunk size and coded caching gain.
Large chunks
5-10 second chunks: easier subpacketization ($\binom{K}{KM/N}$ subchunks still fit). But coarse-grained startup latency.
Small chunks
1-2 second chunks: better latency, but $\binom{K}{KM/N}$ subchunks fit poorly into small chunks. Coded gain reduced.
Optimal
Chunk size balancing subpacketization feasibility with latency goals. Typical: 2-5 seconds.
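One way to see the tradeoff: for a fixed minimum subchunk size (one MTU, an assumption here), larger chunks admit a larger $K$ and hence more coded gain. A sketch:

```python
from math import comb

packet_bytes = 1500  # minimum useful subchunk: one MTU (assumed)

def max_feasible_K(chunk_bytes, t=3):
    """Largest K for which all comb(K, t) subchunks still fill one packet."""
    K = t
    while chunk_bytes / comb(K + 1, t) >= packet_bytes:
        K += 1
    return K

for chunk_mb in (1, 2, 5, 10):
    print(chunk_mb, max_feasible_K(chunk_mb * 1_000_000))
```

Under these assumptions the feasible $K$ grows with chunk size, which is the pressure toward larger chunks that the latency goal pushes against.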
ex-ch21-e13
Hard: Show that coded caching can reduce startup latency by a factor of $1 + KM/N$.
Startup latency $\approx$ server delivery time for the first chunk.
Baseline
Server sends the first chunk (size $S$) to each of $K$ users: $KS$ bits total. Per-user latency $T_0 = KS/B$.
Coded
Server sends the first chunk via MAN-coded multicast: $KS/(1 + KM/N)$ bits total. Per-user latency $T = T_0/(1 + KM/N)$.
Ratio
Startup latency improvement: factor $1 + KM/N$.
Caveat
Assumes cache populated. On first-ever watch (cold cache), coded gain not yet available.
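The derivation as a numeric sketch; $K$, chunk size, link rate, and $M/N$ are all assumed values:

```python
K = 100               # users requesting the first chunk at once (assumed)
chunk_bits = 5 * 8e6  # 5 MB first chunk (assumed)
B = 1e9               # server link in bits/s (assumed)
gain = 1 + K * 0.2    # MAN gain with M/N = 0.2

t_uncoded = K * chunk_bits / B  # unicast the chunk to each user in turn
t_coded = t_uncoded / gain      # coded multicast round

print(t_uncoded, round(t_coded, 3), round(t_uncoded / t_coded, 1))  # 4.0 0.19 21.0
```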
ex-ch21-e14
Medium: What is the role of 5G MBMS in coded video caching?
MBMS
Multimedia Broadcast Multicast Services: 3GPP's broadcast/multicast feature, carried forward into 5G, supporting scalable video broadcast.
Coded alignment
Natural substrate for MAN-style broadcast. Each user decodes via cached MAN subfiles.
Standards work
3GPP SA4 is studying coded-caching + MBMS integration. Standards expected 2025-2027.
Deployment
Live events (stadium, concert): immediate use case. Commercial VoD: longer horizon.
ex-ch21-e15
Hard: Identify one open research problem at the intersection of video streaming and coded caching.
Option A: Non-concave QoE
Current analysis assumes concave QoE. Some QoE models (e.g., rebuffering-dominated) are discontinuous. Optimal caching under non-smooth QoE unclear.
Option B: Coded live streaming
Live content: no pre-caching possible. How does coded caching help? Real-time multicast encoding.
Option C: Cross-chunk dependencies
Video codecs have inter-chunk dependencies (P, B frames). Coded delivery must respect these. Open scheme design.