Prerequisites & Notation

Before You Begin

This chapter requires familiarity with entropy for both discrete and continuous sources, typicality, and the lossless source coding framework from Chapter 5. Convex optimization, in particular Lagrange multipliers and the KKT conditions, also appears prominently.

  • Discrete and differential entropy (Ch 1–2)

    Self-check: Can you compute the differential entropy h(X) of a Gaussian random variable? (A numerical check follows this list.)

  • Mutual information I(X;Y) and its properties (Ch 1)

    Self-check: Can you prove that I(X;Y) = H(X) - H(X|Y) and that I(X;Y) \geq 0? (A derivation sketch follows this list.)

  • Jointly typical sequences and the covering lemma (Ch 3)

    Self-check: Can you state the covering lemma and explain why we need 2^{nI} codewords to cover the typical set? (A counting sketch follows this list.)

  • Lossless source coding and Shannon's theorem (Ch 5)

    Self-check: Can you explain why H(X) is the minimum rate for lossless compression? (A converse sketch follows this list.)

  • Convex optimization: Lagrange multipliers, KKT conditions

    Self-check: Can you minimize a convex function subject to inequality constraints using the KKT conditions? (A worked example follows this list.)
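
For the differential-entropy self-check, a quick numerical comparison against the closed form h(X) = \frac{1}{2}\log_2(2\pi e\sigma^2) bits can be reassuring. The sketch below is illustrative only and assumes NumPy; the variance, sample size, and seed are arbitrary choices.

```python
# Compare the closed-form differential entropy of X ~ N(0, sigma^2)
# with a Monte Carlo estimate of E[-log2 f(X)] using the true density f.
# Illustrative sketch: sigma, sample size, and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)

# log2 of the Gaussian density evaluated at the samples.
log2_f = -0.5 * np.log2(2 * np.pi * sigma**2) - (x**2 / (2 * sigma**2)) * np.log2(np.e)
h_mc = -np.mean(log2_f)                                 # Monte Carlo estimate (bits)
h_closed = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)   # closed form (bits)

print(f"Monte Carlo: {h_mc:.4f} bits   closed form: {h_closed:.4f} bits")
```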
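
For the mutual-information self-check, both facts follow from expanding the log-ratio and from the nonnegativity of relative entropy; the compressed derivation below is the standard one and uses no notation beyond Chapter 1.

```latex
% Identity: expand the log-ratio and split the sum.
I(X;Y) = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)}
       = \sum_{x,y} p(x,y)\log\frac{p(x \mid y)}{p(x)}
       = H(X) - H(X \mid Y).
% Nonnegativity: I(X;Y) = D\!\left(p_{XY}\,\|\,p_X p_Y\right) \ge 0,
% since relative entropy is nonnegative (Jensen's inequality),
% with equality iff X and Y are independent.
```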
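
For the covering-lemma self-check, the exponent 2^{nI} comes from a counting heuristic that the lemma makes precise; a rough version of the standard argument:

```latex
% There are about 2^{nH(X)} typical source sequences, and each codeword
% \hat{x}^n is jointly typical with about 2^{nH(X \mid \hat{X})} of them, so
\frac{2^{nH(X)}}{2^{nH(X \mid \hat{X})}}
  = 2^{n\left(H(X) - H(X \mid \hat{X})\right)}
  = 2^{nI(X;\hat{X})}
% codewords suffice, to first order in the exponent, to cover the typical set.
```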
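
For the lossless-coding self-check, the achievability half is typicality or Huffman coding; the converse is the part worth rehearsing. A standard sketch via the Kraft and Gibbs inequalities:

```latex
% For any uniquely decodable code with lengths \ell(x),
% Kraft's inequality gives \sum_x 2^{-\ell(x)} \le 1. Then
\mathbb{E}[\ell(X)] - H(X)
  = \sum_x p(x)\,\ell(x) + \sum_x p(x)\log_2 p(x)
  = \sum_x p(x)\log_2\frac{p(x)}{2^{-\ell(x)}} \;\ge\; 0,
% by Gibbs' inequality applied to the subnormalized measure 2^{-\ell(x)},
% so no uniquely decodable code can average fewer than H(X) bits per symbol.
```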
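
For the KKT self-check, a one-variable problem with an active inequality constraint is enough to rehearse the mechanics; the example below is chosen arbitrarily for illustration.

```latex
% Minimize f(x) = x^2 subject to g(x) = 1 - x \le 0 (i.e., x \ge 1).
% Stationarity:            2x - \lambda = 0      (\nabla f + \lambda \nabla g = 0)
% Primal feasibility:      1 - x \le 0
% Dual feasibility:        \lambda \ge 0
% Complementary slackness: \lambda\,(1 - x) = 0
% \lambda = 0 would force x = 0, which is infeasible, so the constraint is active:
x^\star = 1, \qquad \lambda^\star = 2,
% which satisfies all four conditions; f(x^\star) = 1 is the constrained minimum.
```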

Notation for This Chapter

Symbols introduced in this chapter.

Symbol | Meaning | Introduced
------ | ------- | ----------
\hat{\mathcal{X}} | Reconstruction alphabet (may differ from the source alphabet \mathcal{X}) | s01
d(x, \hat{x}) | Per-symbol distortion measure, d : \mathcal{X} \times \hat{\mathcal{X}} \to [0, \infty) | s01
D | Target average distortion level | s01
R(D) | Rate-distortion function, R(D) = \min_{P_{\hat{X} \mid X} : \mathbb{E}[d] \leq D} I(X;\hat{X}) | s02
D(R) | Distortion-rate function, the inverse of R(D) | s02
D_{\max} | Maximum meaningful distortion, i.e. the smallest D for which R(D) = 0 | s02
\gamma | Reverse waterfilling level for vector Gaussian sources | s04
U | Auxiliary random variable in Wyner-Ziv coding | s05
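
The R(D) entry above is a constrained minimization over test channels P_{\hat{X} \mid X}, and it can be evaluated numerically. Below is a minimal Blahut-Arimoto sketch for a Bernoulli(p) source under Hamming distortion, checked against the analytic curve R(D) = H_b(p) - H_b(D); it assumes NumPy, and the source parameter p, the beta sweep, and the iteration count are arbitrary choices.

```python
# Minimal Blahut-Arimoto sketch: trace points on the rate-distortion curve
# of a Bernoulli(p) source under Hamming distortion and compare with the
# analytic R(D) = H_b(p) - H_b(D). Parameters here are illustrative only.
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=500):
    """Return one (D, R) point for Lagrange parameter beta (larger beta -> smaller D)."""
    q_xhat = np.full(dist.shape[1], 1.0 / dist.shape[1])    # output marginal, start uniform
    for _ in range(n_iter):
        w = q_xhat * np.exp(-beta * dist)                   # unnormalized test channel
        q_xhat_given_x = w / w.sum(axis=1, keepdims=True)   # optimal P(xhat | x) for this marginal
        q_xhat = p_x @ q_xhat_given_x                       # updated output marginal
    D = np.sum(p_x[:, None] * q_xhat_given_x * dist)        # average distortion
    R = np.sum(p_x[:, None] * q_xhat_given_x *              # mutual information in bits
               np.log2(q_xhat_given_x / q_xhat[None, :]))
    return D, R

def binary_entropy(u):
    return -u * np.log2(u) - (1 - u) * np.log2(1 - u)

p = 0.3
p_x = np.array([1 - p, p])
dist = np.array([[0.0, 1.0],                                # Hamming distortion matrix
                 [1.0, 0.0]])
for beta in (1.0, 2.0, 4.0):
    D, R = blahut_arimoto(p_x, dist, beta)
    print(f"beta={beta}: D={D:.3f}  R={R:.3f}  "
          f"analytic R(D)={binary_entropy(p) - binary_entropy(D):.3f}")
```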