Prerequisites & Notation

Before You Begin

This chapter extends information measures to continuous random variables. We assume mastery of the discrete case from Chapter 1 and basic familiarity with continuous probability.

  • Entropy, mutual information, KL divergence (Chapter 1; review ita/ch01)

    Self-check: Can you state the information inequality and its consequences?

  • Probability density functions, expectation, variance

    Self-check: Can you compute $\mathbb{E}[g(X)]$ for a continuous RV with PDF $f_X$?

  • Gaussian distribution: PDF, moments, moment-generating function

    Self-check: Can you write the PDF of $\mathcal{N}(\mu, \sigma^2)$ and compute its variance?

  • Multivariate Gaussian: joint PDF, covariance matrix, conditional distributions

    Self-check: Can you write the joint PDF of $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ and state the conditional distribution formula?

  • Integration by parts, change of variables

    Self-check: Can you evaluate $\int_0^\infty x e^{-x}\,dx$? (See the sketch after this list.)

  • Determinants and positive definite matrices (review telecom/ch01)

    Self-check: Can you compute the determinant of a $2 \times 2$ covariance matrix?
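
Several of these self-checks can be verified numerically. The sketch below is a minimal illustration, assuming NumPy and SciPy are available; the test function $g(x) = x^2$ and the parameter values sx, sy, rho are arbitrary choices, not part of the chapter.

```python
import numpy as np
from scipy import integrate, stats

# E[g(X)] by quadrature: X ~ N(0, 1) with g(x) = x^2, so E[g(X)] = Var(X) = 1.
def g(x):
    return x**2

val, _ = integrate.quad(lambda x: g(x) * stats.norm.pdf(x), -np.inf, np.inf)
print(val)  # ~1.0

# Integration-by-parts self-check: integral of x * e^(-x) over [0, inf) is 1.
val, _ = integrate.quad(lambda x: x * np.exp(-x), 0, np.inf)
print(val)  # ~1.0

# Determinant of a 2x2 covariance matrix with correlation rho:
# det(Sigma) = sx^2 * sy^2 * (1 - rho^2).
sx, sy, rho = 2.0, 3.0, 0.5  # illustrative values
Sigma = np.array([[sx**2,      rho*sx*sy],
                  [rho*sx*sy,  sy**2    ]])
print(np.linalg.det(Sigma), (sx * sy)**2 * (1 - rho**2))  # both 27.0
```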

Notation for This Chapter

Symbols introduced in this chapter. See Chapter 1 for the discrete information measures that carry over.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $f_X(x)$ | Probability density function (PDF) of continuous RV $X$ | s01 |
| $h(X)$ | Differential entropy: $-\int f(x) \log f(x)\,dx$ | s01 |
| $h(X \mid Y)$ | Conditional differential entropy | s01 |
| $\mathcal{N}(\mu, \sigma^2)$ | Gaussian distribution with mean $\mu$ and variance $\sigma^2$ | s02 |
| $\boldsymbol{\Sigma}$ | Covariance matrix $\boldsymbol{\Sigma} = \mathbb{E}[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^T]$ | s03 |
| $N(X)$ | Entropy power: $N(X) = \frac{1}{2\pi e} 2^{2h(X)}$ | s04 |
| $X^\Delta$ | Quantized version of $X$ with bin width $\Delta$ | s05 |
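
To tie the notation together: the entropy-power formula above uses $2^{2h(X)}$, which suggests entropies here are measured in bits (base-2 logarithms). The sketch below, again assuming NumPy and SciPy and using illustrative values for $\sigma$ and $\Delta$, checks two standard facts: the entropy power of a Gaussian equals its variance, and the entropy of the quantized variable $X^\Delta$ tracks $h(X) - \log_2 \Delta$ for small $\Delta$.

```python
import numpy as np
from scipy import stats

sigma = 2.0  # illustrative value
X = stats.norm(0, sigma)

# Differential entropy of N(0, sigma^2) in bits: h(X) = 0.5 * log2(2*pi*e*sigma^2).
h = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)

# Entropy power N(X) = (1 / (2*pi*e)) * 2^(2*h(X)); for a Gaussian it equals sigma^2.
N = 2 ** (2 * h) / (2 * np.pi * np.e)
print(h, N)  # N ~ 4.0 = sigma^2

# Quantization: with bin width Delta, H(X^Delta) ~ h(X) - log2(Delta) for small Delta.
delta = 0.01
edges = np.arange(-10 * sigma, 10 * sigma, delta)
p = np.diff(X.cdf(edges))  # probability mass in each bin
p = p[p > 0]
H = -(p * np.log2(p)).sum()
print(H, h - np.log2(delta))  # close agreement
```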