Chapter Summary
Key Points
1. The multivariate Gaussian $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is fully specified by its mean and covariance. The PDF involves the precision matrix $\boldsymbol{\Sigma}^{-1}$ in the exponent, and the constant-density contours are ellipsoids whose axes align with the eigenvectors of $\boldsymbol{\Sigma}$.
2. Marginals are Gaussian. If $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is partitioned into $(\mathbf{X}_1, \mathbf{X}_2)$, then $\mathbf{X}_1 \sim \mathcal{N}(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_{11})$ — simply read off the corresponding block of the mean and covariance.
3. Conditionals are Gaussian with Schur complement formulas. $\mathbf{X}_1 \mid \mathbf{X}_2 = \mathbf{x}_2 \sim \mathcal{N}(\boldsymbol{\mu}_{1|2}, \boldsymbol{\Sigma}_{1|2})$, where $\boldsymbol{\mu}_{1|2} = \boldsymbol{\mu}_1 + \boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}(\mathbf{x}_2 - \boldsymbol{\mu}_2)$ is affine in $\mathbf{x}_2$, and $\boldsymbol{\Sigma}_{1|2} = \boldsymbol{\Sigma}_{11} - \boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}\boldsymbol{\Sigma}_{21}$ does not depend on $\mathbf{x}_2$.
4. Affine transformations preserve Gaussianity. If $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, then $\mathbf{A}\mathbf{X} + \mathbf{b} \sim \mathcal{N}(\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^{\top})$. Every linear combination of the components of a Gaussian vector is a scalar Gaussian.
5. Uncorrelated $\Leftrightarrow$ independent for jointly Gaussian random variables. This is the defining structural advantage of the Gaussian family: second-order analysis (decorrelation, PCA, whitening) achieves full statistical independence.
6. The whitening transform $\mathbf{Z} = \boldsymbol{\Sigma}^{-1/2}(\mathbf{X} - \boldsymbol{\mu})$ maps $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ to $\mathcal{N}(\mathbf{0}, \mathbf{I})$. It is the multivariate analogue of standardization and is the first step in many detection and estimation algorithms.
7. Chi-squared and Wishart distributions arise from quadratic functions of Gaussians. The squared Mahalanobis distance satisfies $(\mathbf{X} - \boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{X} - \boldsymbol{\mu}) \sim \chi^2_n$. The sample covariance matrix follows a Wishart distribution.
8. The proper complex Gaussian models baseband noise and Rayleigh fading. Circular symmetry (vanishing pseudo-covariance) means the distribution is invariant to phase rotation, and all real Gaussian properties carry over.
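The Schur complement formulas in point 3 can be checked numerically. The sketch below (the 3-D mean and covariance are made-up illustrative values, not from the chapter) partitions a Gaussian into $\mathbf{X}_1$ and $\mathbf{X}_2$ and computes the conditional mean and covariance:

```python
import numpy as np

# Hypothetical 3-D Gaussian, partitioned as X1 = (X_a), X2 = (X_b, X_c).
mu = np.array([1.0, 0.0, 2.0])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

# Extract the partition blocks.
i1, i2 = [0], [1, 2]
mu1, mu2 = mu[i1], mu[i2]
S11 = Sigma[np.ix_(i1, i1)]
S12 = Sigma[np.ix_(i1, i2)]
S21 = Sigma[np.ix_(i2, i1)]
S22 = Sigma[np.ix_(i2, i2)]

def conditional(x2):
    """Parameters of X1 | X2 = x2 via the Schur complement formulas."""
    mu_cond = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)   # affine in x2
    Sigma_cond = S11 - S12 @ np.linalg.solve(S22, S21)     # free of x2
    return mu_cond, Sigma_cond
```

Note that `Sigma_cond` is computed once and is the same for every observed `x2`, while `mu_cond` shifts linearly with the observation — exactly the structure the summary describes.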
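Point 4 is easy to verify by Monte Carlo: push Gaussian samples through an affine map and compare the empirical mean and covariance with $\mathbf{A}\boldsymbol{\mu} + \mathbf{b}$ and $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^{\top}$. All matrices below are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed for this sketch): X ~ N(mu, Sigma), Y = A X + b.
mu = np.array([0.5, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
A = np.array([[1.0, -1.0],
              [0.5, 2.0]])
b = np.array([3.0, -1.0])

# Sample X, apply the affine map row-wise.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T + b

# Theory predicts Y ~ N(A mu + b, A Sigma A^T).
mean_theory = A @ mu + b
cov_theory = A @ Sigma @ A.T
```

With 200,000 samples the empirical moments of `Y` match the predicted mean and covariance to within sampling error.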
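Points 6 and 7 can be illustrated together: whiten samples from $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ and check that the squared Mahalanobis distance behaves like a $\chi^2_2$ variable (mean $n = 2$). The parameters below are made-up for the sketch; $\boldsymbol{\Sigma}^{-1/2}$ is built from the eigendecomposition of $\boldsymbol{\Sigma}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D Gaussian (assumed values, not from the chapter).
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# Whitening matrix Sigma^{-1/2} via the symmetric eigendecomposition.
w, V = np.linalg.eigh(Sigma)
Sigma_inv_half = V @ np.diag(w ** -0.5) @ V.T

X = rng.multivariate_normal(mu, Sigma, size=100_000)
Z = (X - mu) @ Sigma_inv_half.T      # each row is a whitened sample

# ||Z||^2 is the squared Mahalanobis distance of the original sample,
# distributed chi-squared with n = 2 degrees of freedom (mean 2).
d2 = np.sum(Z ** 2, axis=1)
```

The whitened samples should show approximately zero mean and identity covariance, and the average of `d2` should be close to 2, matching $\mathbb{E}[\chi^2_n] = n$.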
Looking Ahead
Chapter 9 develops generating functions and transforms — the moment generating function and characteristic function — as systematic tools for analyzing sums and limits of random variables. The multivariate characteristic function derived here will be the starting point for proving the multivariate Central Limit Theorem in Chapter 11, which explains why the Gaussian distribution appears so ubiquitously.