Prerequisites & Notation

Before You Begin

This chapter extends Approximate Message Passing (Chapter 20) beyond the i.i.d. Gaussian regime. You should be comfortable with compressed sensing (Chapter 19), AMP and its state evolution (Chapter 20), the LMMSE estimator, and random matrix theory basics. Some exposure to neural networks will help with the learned-unfolding material in section 21.5, though we develop the key ideas from scratch.

  • Approximate Message Passing (AMP): iterations, Onsager term, state evolution (review Chapter 20)

    Self-check: Can you explain why AMP includes the Onsager term $b_t \mathbf{r}_{t-1}$ rather than running a plain residual iteration? (A sketch of the full iteration follows the notation table below.)

  • Compressed sensing and sparse recovery: LASSO, RIP, phase transitions (review Chapter 19)

    Self-check: Can you state the LASSO problem and describe when it recovers the true sparse signal?

  • LMMSE estimator: $\mathbf{W} = \boldsymbol{\Sigma}_{xy}\boldsymbol{\Sigma}_y^{-1}$ and the orthogonality principle (review Chapter 8); we write $\mathbf{W}$ rather than the usual $\mathbf{A}$ here, reserving $\mathbf{A}$ for the sensing matrix

    Self-check: Can you derive the LMMSE estimator and its error covariance?

  • Singular value decomposition and right/left rotational invariance

    Self-check: Can you identify when a random matrix ensemble is right-rotationally invariant?

  • Kronecker product and vectorization identities

    Self-check: Can you verify that $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^{\mathsf{T}} \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X})$? (A numerical check follows this list.)

  • Gradient descent and automatic differentiation (for section 21.5)

    Self-check: Can you express a single ISTA step and identify its trainable parameters? (See the sketch immediately below.)
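
As a warm-up for the unfolding material in section 21.5, here is a minimal NumPy sketch of a single ISTA step for the LASSO. This is an illustrative sketch, not the chapter's reference implementation; the names `soft_threshold` and `ista_step` and the parameters `alpha` (step size) and `theta` (threshold) are ours. Learned variants such as LISTA treat exactly these quantities, or the matrices built from them, as the trainable parameters $\boldsymbol{\Theta}$.

```python
import numpy as np

def soft_threshold(v, theta):
    """Elementwise soft-thresholding: the proximal operator of theta * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_step(x, y, A, alpha, theta):
    """One ISTA step: a gradient step on 0.5 * ||y - A x||^2, then shrinkage.

    Trainable parameters in an unrolled network: the step size alpha and the
    threshold theta (or, in LISTA, the full matrices alpha * A.T and
    I - alpha * A.T @ A that this step implicitly applies).
    """
    grad = A.T @ (A @ x - y)            # gradient of the quadratic data term
    return soft_threshold(x - alpha * grad, theta)
```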

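And here is a quick numerical check of the Kronecker/vectorization identity from the self-check above. Note that $\mathrm{vec}(\cdot)$ stacks columns, so the reshape must use column-major (Fortran) order:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vectorization

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))               # True
```
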
Notation for This Chapter

Symbols introduced or emphasized in this chapter are listed below. The sensing matrix $\mathbf{A}$ plays the central role of the linear measurement operator.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $\mathbf{A}$ | Sensing (measurement) matrix, $\mathbf{A} \in \mathbb{C}^{M \times N}$ | §21.1 |
| $\mathbf{y}$ | Observation vector, $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w}$ | §21.1 |
| $\mathbf{w}$ | AWGN noise vector, $\mathbf{w} \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I})$ | §21.1 |
| $\mathbf{r}_t$ | Pseudo-observation at iteration $t$: $\mathbf{r}_t = \mathbf{x} + \tau_t \mathbf{z}_t$, $\mathbf{z}_t \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ | §21.1 |
| $\tau_t^2$ | Effective noise variance at iteration $t$ | §21.1 |
| $\eta_t(\cdot)$ | Denoiser (thresholding, MMSE, or learned) at iteration $t$ | §21.1 |
| $\mathbf{W}_t$ | Linear estimator matrix (LMMSE or pseudo-inverse) at iteration $t$ | §21.1 |
| $\mathrm{div}\,\eta_t$ | Divergence of the denoiser: $\sum_i \partial \eta_{t,i}/\partial r_i$ | §21.1 |
| $\mathbf{U}\boldsymbol{\Lambda}\mathbf{V}^{\mathsf{H}}$ | SVD of the sensing matrix $\mathbf{A}$, with singular values $\lambda_i$ | §21.2 |
| $\gamma_1, \gamma_2$ | Precisions (inverse variances) in VAMP message passing | §21.2 |
| $\mathbf{z} = \mathbf{A}\mathbf{x}$ | Noiseless linear image of the signal (GAMP) | §21.4 |
| $p(y_i \mid z_i)$ | Per-element likelihood (Gaussian, Poisson, 1-bit, etc.) | §21.4 |
| $\boldsymbol{\Theta}$ | Trainable parameters of an unrolled network | §21.5 |
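
To tie the table's symbols together, and to answer the first self-check above, the sketch below implements AMP for sparse recovery with the soft-thresholding denoiser $\eta_t$. It is a minimal sketch under strong assumptions: real-valued data, $\mathbf{A}$ with i.i.d. $\mathcal{N}(0, 1/M)$ entries, and a threshold proportional to the estimated $\tau_t$; the function names and the tuning constant `theta` are ours.

```python
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(y, A, theta=1.0, n_iter=30):
    """AMP with soft thresholding (sketch; assumes i.i.d. Gaussian A)."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()                              # Onsager-corrected residual
    for t in range(n_iter):
        r = x + A.T @ z                       # pseudo-observation r_t ~ x + tau_t * noise
        tau = np.linalg.norm(z) / np.sqrt(M)  # empirical estimate of tau_t
        x = soft_threshold(r, theta * tau)    # denoiser eta_t
        # Onsager coefficient b_t = div(eta_t) / M; for soft thresholding the
        # divergence is the number of surviving (nonzero) coefficients.
        b = np.count_nonzero(x) / M
        z = y - A @ x + b * z                 # residual with Onsager correction
    return x
```

Dropping the `b * z` term turns this into plain iterative soft thresholding: the errors in $\mathbf{r}_t$ then become correlated across iterations and are no longer well modeled as $\mathbf{x} + \tau_t \mathbf{z}_t$ with Gaussian $\mathbf{z}_t$. The Onsager correction is precisely what restores that Gaussianity, which is what state evolution tracks.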