Prerequisites & Notation
Before You Begin
This chapter extends Approximate Message Passing (Chapter 20) beyond the i.i.d. Gaussian regime. You should be comfortable with compressed sensing (Chapter 19), AMP and its state evolution (Chapter 20), the LMMSE estimator, and random matrix theory basics. Some exposure to neural networks will help with the learned-unfolding material in section 21.5, though we develop the key ideas from scratch.
- Approximate Message Passing (AMP): iterations, Onsager term, state evolution (review Chapter 20)
Self-check: Can you explain why AMP includes the Onsager correction term rather than using a plain residual iteration?
- Compressed sensing and sparse recovery: LASSO, RIP, phase transitions (review Chapter 19)
Self-check: Can you state the LASSO problem and describe when it recovers the true sparse signal?
- LMMSE estimator: $\hat{x} = C_{xy} C_y^{-1} y$ and the orthogonality principle (review Chapter 8)
Self-check: Can you derive the LMMSE estimator and its error covariance?
- Singular value decomposition and right/left rotational invariance
Self-check: Can you identify when a random matrix ensemble is right-rotationally invariant?
- Kronecker product and vectorization identities
Self-check: Can you verify that $\mathrm{vec}(AXB) = (B^\top \otimes A)\,\mathrm{vec}(X)$?
- Gradient descent and automatic differentiation (for section 21.5)
Self-check: Can you express a single ISTA step and identify its trainable parameters?
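To exercise the AMP self-check concretely, here is a minimal NumPy sketch of AMP with a soft-thresholding denoiser on an i.i.d. Gaussian matrix. All problem sizes, the seed, and the threshold rule `2 * tau` are illustrative assumptions, not values from the chapter; the point is the Onsager correction `(nnz/m) * z` added to the residual, which a plain residual iteration omits.

```python
import numpy as np

def soft_threshold(r, t):
    # Soft-thresholding denoiser: sign(r) * max(|r| - t, 0)
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

rng = np.random.default_rng(0)
m, n, k = 80, 200, 10                             # assumed problem sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)      # i.i.d. Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                         # noiseless measurements for simplicity

x_t = np.zeros(n)
z_t = y.copy()
for t in range(30):
    tau = np.linalg.norm(z_t) / np.sqrt(m)        # effective noise level estimate
    r_t = x_t + A.T @ z_t                         # pseudo-observation
    x_new = soft_threshold(r_t, 2.0 * tau)        # threshold = alpha * tau, alpha = 2 assumed
    onsager = (np.count_nonzero(x_new) / m) * z_t # divergence of soft threshold: support fraction
    z_t = y - A @ x_new + onsager                 # residual WITH Onsager correction
    x_t = x_new
```

Dropping the `onsager` term turns this into plain iterative thresholding, whose iterates no longer behave like the true signal plus Gaussian noise, and state evolution no longer predicts the error.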
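For the LMMSE self-check, a quick Monte-Carlo sketch (assuming a standard Gaussian prior $x \sim \mathcal{N}(0, I)$ and noise variance `sigma2`, both illustrative) verifies the orthogonality principle: the estimation error is empirically uncorrelated with the observation.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, sigma2 = 30, 50, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)

# LMMSE matrix for x ~ N(0, I), y = A x + w, w ~ N(0, sigma2 I):
# x_hat = A^T (A A^T + sigma2 I)^{-1} y
K = A.T @ np.linalg.inv(A @ A.T + sigma2 * np.eye(m))

N = 20000                                   # Monte-Carlo sample count
X = rng.standard_normal((n, N))             # signal samples
W = np.sqrt(sigma2) * rng.standard_normal((m, N))
Y = A @ X + W
Xhat = K @ Y
err = X - Xhat
cross = err @ Y.T / N                       # empirical E[(x - x_hat) y^T], should be ~ 0
```

The matrix `cross` approximating $\mathbb{E}[(x - \hat{x}) y^\top]$ should vanish up to sampling noise; this orthogonality is exactly what makes the estimator the best linear one.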
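The Kronecker self-check can be verified numerically in a few lines. Note that the identity uses column-stacking vectorization, which in NumPy is `reshape(-1, order='F')`, not the default row-major flatten.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # Column-stacking vectorization (Fortran order)
    return M.reshape(-1, order="F")

lhs = vec(A @ X @ B)                 # vec(A X B)
rhs = np.kron(B.T, A) @ vec(X)       # (B^T kron A) vec(X)
```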
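Finally, for the ISTA self-check, a sketch of a single ISTA step for the LASSO objective (problem sizes and `lam` are illustrative). In LISTA-style unrolling of the kind developed in section 21.5, the step size, the threshold, and the matrices applied to `x` and `y` become per-layer trainable parameters.

```python
import numpy as np

def soft_threshold(r, t):
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

def ista_step(x, y, A, step, lam):
    # One ISTA iteration for 0.5*||y - A x||^2 + lam*||x||_1:
    # gradient step on the quadratic part, then the soft-threshold proximal step.
    return soft_threshold(x + step * A.T @ (y - A @ x), step * lam)

rng = np.random.default_rng(4)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = 1.0
y = A @ x_true

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2 guarantees descent

def objective(x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.abs(x))

x0 = np.zeros(n)
obj0 = objective(x0)
x_hat = x0
for _ in range(100):
    x_hat = ista_step(x_hat, y, A, step, lam)
```

The trainable parameters in an unrolled version are `step`, `lam` (one pair per layer), and optionally learned replacements for `A.T`.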
Notation for This Chapter
Symbols introduced or emphasized in this chapter. The sensing matrix $A$ plays the central role of the linear measurement operator.
| Symbol | Meaning | Introduced |
|---|---|---|
| $A$ | Sensing (measurement) matrix, $A \in \mathbb{R}^{m \times n}$ | §21.1 |
| $y$ | Observation vector, $y = Ax + w$ | §21.1 |
| $w$ | AWGN noise vector, $w \sim \mathcal{N}(0, \sigma^2 I)$ | §21.1 |
| $r^t$ | Pseudo-observation at iteration $t$: $r^t = \hat{x}^t + A^\top z^t$ | §21.1 |
| $\tau_t$ | Effective noise variance at iteration $t$ | §21.1 |
| $\eta_t(\cdot)$ | Denoiser (thresholding, MMSE, or learned) at iteration $t$ | §21.1 |
| $W_t$ | Linear estimator matrix (LMMSE or pseudo-inverse) at iteration $t$ | §21.1 |
| $\langle \eta_t' \rangle$ | Divergence of the denoiser: $\frac{1}{n} \sum_{i=1}^{n} \partial \eta_t(r)_i / \partial r_i$ | §21.1 |
| $A = U S V^\top$ | SVD of sensing matrix $A$, with singular values $s_i$ | §21.2 |
| $\gamma_1, \gamma_2$ | Precisions (inverse variances) in VAMP message passing | §21.2 |
| $z = Ax$ | Noiseless linear image of the signal (GAMP) | §21.4 |
| $p(y_i \mid z_i)$ | Per-element likelihood (Gaussian, Poisson, 1-bit, etc.) | §21.4 |
| $\Theta$ | Trainable parameters of an unrolled network | §21.5 |
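The divergence entry in the table is easy to sanity-check numerically. For the soft-thresholding denoiser, the divergence $\frac{1}{n}\sum_i \partial \eta(r)_i / \partial r_i$ reduces to the fraction of entries above the threshold, which the sketch below (with an illustrative threshold `lam`) confirms by central finite differences.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 10000, 0.7
r = rng.standard_normal(n)

def eta(u):
    # Soft-thresholding denoiser with fixed threshold lam
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Closed form: derivative is 1 where |r_i| > lam, else 0,
# so the divergence is the active-set (support) fraction.
div_analytic = np.count_nonzero(np.abs(r) > lam) / n

# Central finite-difference estimate of (1/n) sum_i d eta_i / d r_i
eps = 1e-6
div_numeric = np.mean((eta(r + eps) - eta(r - eps)) / (2 * eps))
```

This support-fraction form is what the Onsager correction uses in practice: no explicit Jacobian is ever built.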