Prerequisites & Notation
Before You Begin
This chapter requires familiarity with entropy for both discrete and continuous sources, typicality, and the lossless source coding framework from Chapter 5. Convex optimization, in particular Lagrange multipliers and the KKT conditions, appears prominently.
- Discrete and differential entropy (Ch 1–2)(Review ch02)
Self-check: Can you compute the differential entropy of a Gaussian random variable? (A numerical sketch follows this list.)
- Jointly typical sequences and the covering lemma (Ch 3)(Review ch03)
Self-check: Can you state the covering lemma and explain why we need codewords to cover the typical set?
- Lossless source coding and Shannon's theorem (Ch 5)(Review ch05)
Self-check: Can you explain why $H(X)$ is the minimum rate for lossless compression?
- Convex optimization: Lagrange multipliers, KKT conditions
Self-check: Can you minimize a convex function subject to inequality constraints using the KKT conditions? (A worked numerical sketch follows this list.)
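For the differential-entropy self-check, here is a minimal sketch (assuming NumPy and SciPy are available; the variable names are illustrative) comparing the closed form $h(X) = \tfrac{1}{2}\log_2(2\pi e \sigma^2)$ against a Monte Carlo estimate of $\mathbb{E}[-\log_2 f(X)]$:

```python
# Minimal sketch: check h(X) = (1/2) log2(2*pi*e*sigma^2) for X ~ N(mu, sigma^2)
# against a Monte Carlo estimate of E[-log2 f(X)].
import numpy as np
from scipy.stats import norm

mu, sigma = 0.0, 2.0
closed_form = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=1_000_000)
# Differential entropy is the expectation of -log2 of the density;
# logpdf returns natural logs, so divide by ln(2) to convert to bits.
monte_carlo = -np.mean(norm.logpdf(samples, mu, sigma)) / np.log(2)

print(f"closed form: {closed_form:.4f} bits")  # ~3.0471 bits
print(f"Monte Carlo: {monte_carlo:.4f} bits")
```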
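For the KKT self-check, a minimal sketch (SciPy assumed; the example problem is ours, not from the chapter) that minimizes a convex quadratic under a linear inequality constraint and compares the numerical answer with the KKT solution worked out by hand:

```python
# Minimal sketch: minimize f(x) = (x1-1)^2 + (x2-2)^2 subject to x1 + x2 <= 2.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2
# scipy expects inequality constraints in the form g(x) >= 0.
con = {"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}

res = minimize(f, x0=np.zeros(2), constraints=[con])

# By KKT: the unconstrained minimum (1, 2) violates the constraint, so the
# constraint is active; stationarity of the Lagrangian gives x* = (0.5, 1.5)
# with multiplier lambda = 1 >= 0, confirming optimality.
print(res.x)  # ~ [0.5, 1.5]
```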
Notation for This Chapter
Symbols introduced in this chapter.
| Symbol | Meaning | Introduced |
|---|---|---|
| $\hat{\mathcal{X}}$ | Reconstruction alphabet (may differ from source alphabet $\mathcal{X}$) | s01 |
| $d(x, \hat{x})$ | Per-symbol distortion measure: $d : \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty)$ | s01 |
| $D$ | Target average distortion level | s01 |
| $R(D)$ | Rate-distortion function: $R(D) = \min_{p(\hat{x} \mid x) :\, \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X})$ | s02 |
| $D(R)$ | Distortion-rate function: the inverse of $R(D)$ | s02 |
| $D_{\max}$ | Maximum meaningful distortion ($R(D) = 0$ for $D \ge D_{\max}$) | s02 |
| $\lambda$ | Reverse waterfilling level for vector Gaussian sources | s04 |
| $U$ | Auxiliary random variable in Wyner-Ziv coding | s05 |
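To illustrate the reverse waterfilling level $\lambda$ from the table: for independent Gaussian components with variances $\sigma_i^2$, choose $\lambda$ so that $\sum_i \min(\lambda, \sigma_i^2) = D$, set $D_i = \min(\lambda, \sigma_i^2)$, and the total rate is $R = \sum_i \tfrac{1}{2}\log_2(\sigma_i^2 / D_i)$. A minimal sketch (NumPy assumed; the helper `reverse_waterfill` is a name we introduce for illustration):

```python
# Minimal sketch: reverse waterfilling for a vector Gaussian source.
# Bisect on the level lambda until sum_i min(lambda, sigma2[i]) = D.
import numpy as np

def reverse_waterfill(sigma2, D, tol=1e-12):
    """Return (per-component distortions D_i, total rate in bits)."""
    lo, hi = 0.0, max(sigma2)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.minimum(lam, sigma2).sum() < D:
            lo = lam
        else:
            hi = lam
    D_i = np.minimum(lo, sigma2)
    # Components with D_i = sigma2[i] get zero rate: log2(1) = 0.
    rate = 0.5 * np.log2(sigma2 / D_i).sum()
    return D_i, rate

sigma2 = np.array([4.0, 1.0, 0.25])
D_i, R = reverse_waterfill(sigma2, D=1.5)
print(D_i, R)  # lambda = 0.625: D_i ~ [0.625, 0.625, 0.25], R ~ 1.678 bits
```

Note that the weakest component ($\sigma_3^2 = 0.25 < \lambda$) is not coded at all: it is reproduced at distortion equal to its variance, and all the rate goes to the stronger components.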