Chapter Summary
Key Points
1. The Bayesian framework treats the parameter $\theta$ as a random variable with a prior $p(\theta)$. Bayes' rule produces the posterior $p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta)$, which summarizes all information about $\theta$ after observing $y$.
2. The MAP estimator returns the posterior mode; the MMSE estimator returns the posterior mean. Under a flat prior, MAP reduces to the MLE. Under a Gaussian posterior, MAP and MMSE coincide.
3. The MMSE estimator is the conditional mean: $\hat{\theta}_{\text{MMSE}} = \mathbb{E}[\theta \mid y]$. This is proved via the orthogonality principle, which says the residual $\theta - \hat{\theta}_{\text{MMSE}}(y)$ is uncorrelated with every function of the observation $y$.
4. The LMMSE estimator, restricted to affine functions of $y$, has the closed form $\hat{\theta}_{\text{LMMSE}} = \mu_\theta + C_{\theta y} C_y^{-1}(y - \mu_y)$, requiring only second-order statistics. Its error covariance is $C_e = C_\theta - C_{\theta y} C_y^{-1} C_{y\theta}$.
5. For jointly Gaussian $(\theta, y)$, the conditional mean is affine in $y$, so MMSE = LMMSE = MAP (see the first numerical sketch after this list). This fact is responsible for the ubiquity of linear receivers in wireless communications.
6. Pilot-based channel estimation illustrates the Bayesian advantage: the LS estimator is the BLUE when no prior is available, while the MMSE estimator strictly outperforms it whenever an informative channel covariance is known, with the largest gains at low SNR (see the second sketch after this list).
7. Orthogonality is the unifying principle: whether proving MMSE = conditional mean, deriving the LMMSE normal equations, or verifying that the Kalman filter innovation is white, one repeatedly invokes "the residual is uncorrelated with what we've already used".
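To make points 3 to 5 concrete, here is a minimal numerical sketch in Python/NumPy. It assumes a hypothetical linear-Gaussian model $y = H\theta + n$ with made-up dimensions, prior covariance, and noise variance (none of these values come from the chapter). It forms the LMMSE estimator from second-order statistics alone, empirically checks the orthogonality principle, and compares the theoretical error covariance with the simulated MSE; because the pair is jointly Gaussian, this LMMSE estimate is also the MMSE estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian model y = H theta + n; all sizes and
# covariances below are illustrative, not values from the chapter.
n, m, N = 3, 4, 200_000
H = rng.standard_normal((m, n))
C_theta = 0.7 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # prior covariance
sigma2 = 0.5                                                            # noise variance

theta = rng.standard_normal((N, n)) @ np.linalg.cholesky(C_theta).T     # zero-mean prior draws
y = theta @ H.T + np.sqrt(sigma2) * rng.standard_normal((N, m))

# LMMSE estimator from second-order statistics only (means are zero here):
#   theta_hat = C_{theta y} C_y^{-1} y
C_y = H @ C_theta @ H.T + sigma2 * np.eye(m)
C_theta_y = C_theta @ H.T
W = C_theta_y @ np.linalg.inv(C_y)
theta_hat = y @ W.T

# Orthogonality principle: the residual is (empirically) uncorrelated with y.
resid = theta - theta_hat
print("max |cov(resid, y)|:", np.abs(resid.T @ y / N).max())

# Error covariance C_e = C_theta - C_{theta y} C_y^{-1} C_{y theta};
# for jointly Gaussian (theta, y) the LMMSE estimate is also the MMSE estimate.
C_e = C_theta - C_theta_y @ np.linalg.inv(C_y) @ C_theta_y.T
print("theoretical MSE:", np.trace(C_e), " empirical MSE:", (resid ** 2).sum(axis=1).mean())
```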
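Point 6 can be illustrated the same way. The sketch below assumes a hypothetical real-valued pilot model $y = Xh + n$ with an exponentially correlated tap covariance; the pilot matrix, correlation coefficient, and SNR grid are illustrative choices rather than values from the chapter. At low SNR the MMSE estimator, which exploits the channel covariance, should show a clearly lower MSE than LS, and the gap should shrink as the SNR grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pilot model (real-valued for simplicity): y = X h + n,
# with K pilot observations of an L-tap channel h ~ N(0, C_h).
L, K, trials = 4, 8, 50_000
X = rng.standard_normal((K, L))                                      # known pilot matrix
C_h = 0.9 ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))   # correlated taps
h = rng.multivariate_normal(np.zeros(L), C_h, size=trials)           # (trials, L)

for snr_db in (-5.0, 5.0, 15.0):
    sigma2 = 10 ** (-snr_db / 10)
    y = h @ X.T + np.sqrt(sigma2) * rng.standard_normal((trials, K))

    # LS estimate (the BLUE when no prior is available):
    #   h_LS = (X^T X)^{-1} X^T y
    h_ls = y @ np.linalg.solve(X.T @ X, X.T).T

    # MMSE estimate (uses the channel covariance C_h):
    #   h_MMSE = C_h X^T (X C_h X^T + sigma2 I)^{-1} y
    W = C_h @ X.T @ np.linalg.inv(X @ C_h @ X.T + sigma2 * np.eye(K))
    h_mmse = y @ W.T

    mse_ls = np.mean(np.sum((h - h_ls) ** 2, axis=1))
    mse_mmse = np.mean(np.sum((h - h_mmse) ** 2, axis=1))
    print(f"SNR {snr_db:5.1f} dB: LS MSE = {mse_ls:.3f}, MMSE MSE = {mse_mmse:.3f}")
```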
Looking Ahead
The next chapter develops the EM algorithm, which extends Bayesian estimation to problems where the posterior contains latent variables or mixture components and therefore no closed form exists. From Chapter 9 onward the LMMSE viewpoint is carried into the frequency domain (Wiener filters) and into recursive estimation (Kalman filters) — all of them orthogonality-principle calculations in disguise.