Prerequisites & Notation
Before You Begin
This chapter is research-level. It synthesises ideas from Bayesian estimation, hypothesis testing, information theory, and wireless signal processing. If any of the following feels rusty, review it before reading on; the chapter assumes these tools without pausing to rebuild them.
- Fisher information and the Cramér-Rao bound (Review FSI Ch. 18)
Self-check: Can you state the CRLB for a scalar parameter and explain why it may be loose at low SNR?
- Bayesian MMSE estimation and the posterior mean (Review FSI Ch. 7-8)
Self-check: Can you derive the MMSE estimator for a Gaussian prior and linear Gaussian observation?
- Binary hypothesis testing, minimum probability of error (Review FSI Ch. 1)
Self-check: Can you write the minimum error probability between two equi-prior Gaussians with means $\mu_0, \mu_1$ and common variance $\sigma^2$?
- Mutual information and differential entropy (Review ITA Ch. 2-3)
Self-check: Can you compute $I(X;Y)$ when $Y = X + N$ with $X \sim \mathcal{N}(0,P)$ and $N \sim \mathcal{N}(0,\sigma^2)$?
- Convex optimization and KKT conditions (Review Telecom Ch. 3)
Self-check: Can you set up a convex problem with linear constraints and recognise when the KKT conditions are sufficient?
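If you want to warm up, the first three self-checks can be verified numerically. The sketch below (means, variances, and sample size are illustrative choices, not values from this chapter) checks the closed-form Gaussian MMSE estimator against Monte Carlo, and the two-equi-prior-Gaussian minimum error probability $Q\!\left(|\mu_1-\mu_0|/2\sigma\right)$ against a simulated midpoint-threshold detector.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

# --- Self-check: MMSE estimator, Gaussian prior + linear Gaussian observation.
# theta ~ N(0, s2_th), y = theta + w, w ~ N(0, s2_w)  (illustrative variances).
s2_th, s2_w = 2.0, 1.0
theta = rng.normal(0.0, np.sqrt(s2_th), n)
y = theta + rng.normal(0.0, np.sqrt(s2_w), n)

gain = s2_th / (s2_th + s2_w)                  # posterior-mean gain
theta_hat = gain * y                           # E[theta | y]
mse_emp = np.mean((theta_hat - theta) ** 2)
mmse_closed = s2_th * s2_w / (s2_th + s2_w)    # closed-form Bayesian MMSE

# --- Self-check: minimum error probability, two equi-prior Gaussians,
# means mu0, mu1, common variance sigma^2:  P_e = Q(|mu1 - mu0| / (2*sigma)).
mu0, mu1, sigma = 0.0, 2.0, 1.0
pe_closed = norm.sf(abs(mu1 - mu0) / (2 * sigma))   # Q-function via survival fn

h = rng.integers(0, 2, n)                       # equi-prior hypothesis labels
x = rng.normal(np.where(h == 1, mu1, mu0), sigma)
decide = (x > (mu0 + mu1) / 2).astype(int)      # ML rule: midpoint threshold
pe_emp = np.mean(decide != h)

print(f"MMSE: empirical {mse_emp:.4f} vs closed-form {mmse_closed:.4f}")
print(f"P_e : empirical {pe_emp:.4f} vs closed-form {pe_closed:.4f}")
```

The empirical figures should match the closed forms to within Monte Carlo error; if they do not, revisit the corresponding FSI chapters before continuing.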
Notation for This Chapter
Symbols introduced in this chapter. The Van Trees, Ziv-Zakai, and I-MMSE literatures use overlapping notation; we follow the conventions that minimise collisions with the rest of the book.
| Symbol | Meaning | Introduced |
|---|---|---|
| $\theta$ | Scalar (or vector) parameter to be estimated, treated as random under a Bayesian prior | s01 |
| $p(\theta)$ | Prior density on $\theta$ (must vanish at the boundary of its support for Van Trees) | s01 |
| $J_F(\theta)$ | Classical (frequentist) Fisher information at parameter value $\theta$ | s01 |
| $J_P$ | Prior information: $J_P = \mathbb{E}\!\left[\left(\partial \ln p(\theta)/\partial \theta\right)^2\right]$ | s01 |
| $J_B$ | Bayesian information: $J_B = \mathbb{E}_\theta[J_F(\theta)] + J_P$ | s01 |
| $P_{\min}$ | Minimum probability of error between two equi-prior hypotheses | s02 |
| $\mathrm{ZZB}$ | Ziv-Zakai bound on MSE | s02 |
| $\mathsf{snr}$ | Scalar signal-to-noise ratio parameter in the I-MMSE identity | s03 |
| $\mathrm{mmse}(\mathsf{snr})$ | MMSE of estimating $X$ from $Y = \sqrt{\mathsf{snr}}\,X + N$, where $N \sim \mathcal{N}(0,1)$ | s03 |
| $\mathbf{R}_X$ | Transmit sample covariance matrix in ISAC | s04 |
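To make the $\mathsf{snr}$ and $\mathrm{mmse}(\mathsf{snr})$ entries concrete: for a standard Gaussian input $X \sim \mathcal{N}(0,1)$ these quantities have closed forms, $\mathrm{mmse}(\mathsf{snr}) = 1/(1+\mathsf{snr})$ and $I(X;Y) = \tfrac{1}{2}\ln(1+\mathsf{snr})$ nats, and the I-MMSE identity $\mathrm{d}I/\mathrm{d}\,\mathsf{snr} = \tfrac{1}{2}\,\mathrm{mmse}(\mathsf{snr})$ can be sanity-checked by finite differences. A minimal sketch (the snr value and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Closed forms for standard Gaussian input X ~ N(0,1), Y = sqrt(snr)*X + N, N ~ N(0,1).
def mmse_gauss(snr):
    return 1.0 / (1.0 + snr)      # mmse(snr) for Gaussian X

def mi_gauss(snr):
    return 0.5 * np.log1p(snr)    # I(X;Y) in nats

snr, h = 3.0, 1e-6
dI_dsnr = (mi_gauss(snr + h) - mi_gauss(snr - h)) / (2 * h)  # central difference
print(dI_dsnr, 0.5 * mmse_gauss(snr))   # I-MMSE: the two should agree

# Monte Carlo check of mmse(snr): the conditional mean here is
# E[X|Y] = sqrt(snr)/(1+snr) * Y for this jointly Gaussian pair.
n = 200_000
x = rng.normal(size=n)
y = np.sqrt(snr) * x + rng.normal(size=n)
x_hat = np.sqrt(snr) / (1 + snr) * y
mmse_emp = np.mean((x_hat - x) ** 2)
print(mmse_emp, mmse_gauss(snr))
```

For non-Gaussian inputs $\mathrm{mmse}(\mathsf{snr})$ has no such closed form, which is precisely where the I-MMSE machinery of this chapter earns its keep.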