Prerequisites & Notation

Prerequisites for This Chapter

This chapter develops OAMP/VAMP, the algorithm family that replaces AMP when the sensing matrix $\mathbf{A}$ is not i.i.d. Gaussian. We first recap AMP and explain its failure for RF imaging operators, then build OAMP from scratch, exploit Kronecker structure for efficiency, and design denoisers ranging from classical to learned.

Prerequisites:

  • Factor Graphs and Belief Propagation β€” Factor graphs, sum-product message passing, the Gaussian BP specialization. OAMP is derived from expectation propagation on the dense CS factor graph.
  • FSI Ch 20 β€” AMP in Depth β€” The derivation of AMP, the Onsager correction, and state evolution. We recap the essentials in Section 17.1, but readers who have studied AMP in FSI will find the material more natural.
  • Factor graphs and belief propagation(Review ch16)

    Self-check: Can you write the BP messages for a Gaussian linear model?
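
As a refresher for this self-check: at a variable node, Gaussian BP combines incoming messages by multiplying Gaussian densities, so precisions (inverse variances) add and the combined mean is the precision-weighted average. A minimal numerical sketch (plain NumPy, not tied to any chapter's code):

```python
import numpy as np

def combine_gaussian_messages(means, variances):
    """Product of 1-D Gaussian messages at a variable node.

    Precisions add; the combined mean is the precision-weighted
    average of the incoming message means.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    total_precision = precisions.sum()
    mean = (precisions * np.asarray(means, dtype=float)).sum() / total_precision
    return mean, 1.0 / total_precision

# Two unit-variance messages centered at 0 and 2 combine to a
# Gaussian with mean 1 and variance 1/2.
m, v = combine_gaussian_messages([0.0, 2.0], [1.0, 1.0])
```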

  • AMP iteration and the Onsager correction

    Self-check: Can you write one AMP iteration and explain why the Onsager term matters?
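
For this self-check, one real-valued AMP iteration with a soft-thresholding denoiser can be sketched as follows (illustrative only; the threshold schedule and the complex-valued version are developed in the chapter):

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding denoiser: eta(x) = sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def amp_iteration(A, y, c_hat, r, lam):
    """One AMP step: denoise the pseudo-data, then form the corrected residual."""
    M, N = A.shape
    pseudo = c_hat + A.T @ r             # pseudo-data: estimate + effective noise
    c_new = soft_threshold(pseudo, lam)  # componentwise denoising
    # Onsager correction: (N/M) times the empirical divergence of the
    # denoiser, which for soft thresholding is the surviving fraction.
    onsager = (N / M) * np.mean(np.abs(pseudo) > lam) * r
    r_new = y - A @ c_new + onsager
    return c_new, r_new
```

Dropping the `onsager` term gives plain iterative soft thresholding; keeping it is what makes the effective noise in `pseudo` approximately Gaussian, which is the property state evolution tracks.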

  • Singular value decomposition (SVD)

    Self-check: Can you compute the SVD of a matrix and interpret the singular values?
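
A minimal NumPy sketch for the SVD self-check: the singular values measure how strongly the matrix scales each orthogonal input direction, and the three factors reconstruct the matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))

# Thin SVD: U is 4x4, s holds the 4 singular values, Vh is 4x6.
U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Singular values come back nonnegative and sorted in decreasing
# order, and U @ diag(s) @ Vh reconstructs A.
A_rec = U @ np.diag(s) @ Vh
```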

  • Kronecker product and its properties

    Self-check: Can you evaluate (AβŠ—B)βˆ’1(\mathbf{A} \otimes \mathbf{B})^{-1} in terms of Aβˆ’1\mathbf{A}^{-1} and Bβˆ’1\mathbf{B}^{-1}?
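
The answer to this self-check can be verified numerically: for invertible square factors, the inverse of a Kronecker product is the Kronecker product of the inverses. A small sketch with hand-picked invertible matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # det = 6, invertible
B = np.array([[1.0, 4.0],
              [2.0, 1.0]])   # det = -7, invertible

# (A kron B)^{-1} equals A^{-1} kron B^{-1}.
lhs = np.linalg.inv(np.kron(A, B))
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))
```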

  • LMMSE estimation

    Self-check: Can you write the LMMSE estimator for a linear Gaussian model?
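
For the LMMSE self-check: with prior $\mathbf{c} \sim \mathcal{N}(0, \sigma_c^2 \mathbf{I})$ and noise $\mathbf{w} \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$, the estimator is $\hat{\mathbf{c}} = \sigma_c^2 \mathbf{A}^{\mathsf{T}} (\sigma_c^2 \mathbf{A}\mathbf{A}^{\mathsf{T}} + \sigma^2 \mathbf{I})^{-1} \mathbf{y}$. A real-valued sketch with hypothetical sizes:

```python
import numpy as np

def lmmse_estimate(A, y, sigma_c2, sigma2):
    """LMMSE estimator for y = A c + w with isotropic Gaussian prior and noise."""
    M = A.shape[0]
    # Measurement-space form: only an M x M solve, cheaper than the
    # equivalent N x N form when M < N.
    gram = sigma_c2 * (A @ A.T) + sigma2 * np.eye(M)
    return sigma_c2 * A.T @ np.linalg.solve(gram, y)

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 30)) / np.sqrt(10)
c = rng.standard_normal(30)
y = A @ c + 0.1 * rng.standard_normal(10)
c_hat = lmmse_estimate(A, y, sigma_c2=1.0, sigma2=0.01)
```

By the matrix inversion lemma this agrees with the information form $(\mathbf{A}^{\mathsf{T}}\mathbf{A}/\sigma^2 + \mathbf{I}/\sigma_c^2)^{-1} \mathbf{A}^{\mathsf{T}}\mathbf{y}/\sigma^2$.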

Notation and Conventions

Symbols introduced and used in this chapter. We follow the conventions of the RF imaging forward model from Chapter 8.

| Symbol | Meaning | Introduced |
|---|---|---|
| $\mathbf{A} \in \mathbb{C}^{M \times N}$ | Sensing (measurement) matrix | s01 |
| $\mathbf{c} \in \mathbb{C}^N$ | Reflectivity vector (true scene) | s01 |
| $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$ | Linear imaging observation model | s01 |
| $\sigma^2$ | Noise variance | s01 |
| $\delta = M/N$ | Measurement ratio (undersampling ratio) | s01 |
| $\hat{\mathbf{c}}^t$ | Estimate of the reflectivity at iteration $t$ | s01 |
| $\mathbf{r}^t$ | Residual at iteration $t$ | s01 |
| $\eta_t(\cdot)$ | Denoiser at iteration $t$ | s01 |
| $\tau_t$ | Effective noise variance (state evolution parameter) at iteration $t$ | s01 |
| $\mathbf{A}_1, \mathbf{A}_2$ | Kronecker factors of the sensing matrix: $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2$ | s03 |
| $\hat{\mathbf{c}}_1^t, \hat{\mathbf{c}}_2^t$ | Outputs of the LMMSE step and denoiser step, respectively | s02 |
| $v_1^t, v_2^t$ | MSE of the LMMSE step and denoiser step | s02 |
| $\text{div}(\eta)$ | Divergence of the denoiser: $\frac{1}{N}\sum_i \frac{\partial \eta_i}{\partial r_i}$ | s02 |
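
To tie the notation together, the forward model with a Kronecker-structured sensing matrix can be instantiated as a sketch (real-valued, with hypothetical sizes; the chapter works in $\mathbb{C}$). The point of the structure is that $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2$ never needs to be formed: with row-major vectorization, the Kronecker matvec reduces to two small matrix products.

```python
import numpy as np

rng = np.random.default_rng(2)
m1, n1, m2, n2 = 3, 4, 5, 6          # hypothetical dimensions
A1 = rng.standard_normal((m1, n1))   # Kronecker factors of A
A2 = rng.standard_normal((m2, n2))
C = rng.standard_normal((n1, n2))    # scene as an n1 x n2 array
sigma = 0.01

# Dense forward model: y = (A1 kron A2) vec(C) + w
A = np.kron(A1, A2)
c = C.flatten()                      # row-major vectorization
w = sigma * rng.standard_normal(m1 * m2)
y = A @ c + w

# Structured matvec: with row-major flattening,
# (A1 kron A2) @ C.flatten() == (A1 @ C @ A2.T).flatten(),
# costing two small products instead of one (m1*m2) x (n1*n2) product.
y_fast = (A1 @ C @ A2.T).flatten() + w
```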