Prerequisites & Notation

Before You Begin

Approximate Message Passing (AMP) sits at the intersection of statistical physics, compressed sensing, and high-dimensional statistics. To follow the derivation and the state-evolution analysis comfortably, ensure the following background is fresh.

  • Compressed sensing: sparse recovery, RIP, basis pursuit, LASSO (Review Ch. 17)

    Self-check: Can you state the LASSO problem and its KKT conditions?

  • ISTA / proximal gradient methods and soft-thresholding (Review Ch. 18)

    Self-check: Can you derive the soft-threshold operator as the proximal map of $\lambda\|\cdot\|_1$?

  • Gaussian i.i.d. random matrices and their concentration properties

    Self-check: Do you know why $\frac{1}{M}\mathbf{A}^\top\mathbf{A} \to \mathbf{I}_N$ fails (in spectral norm) in the proportional asymptotic regime $M/N = \delta$?

  • MMSE estimation and scalar denoising in AWGN (Review Ch. 6)

    Self-check: Can you compute $\mathbb{E}[X|Y=y]$ for a Bernoulli--Gaussian prior observed in Gaussian noise?

  • Belief propagation on factor graphs and the Gaussian approximation

    Self-check: Can you explain why sum-product messages on dense graphs converge to Gaussians?

  • Basic statistical physics: mean-field theory, Bethe free energy (helpful, not required)

    Self-check: Are you familiar with the cavity method?
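Several of these self-checks come back to the soft-threshold operator $\eta(v;\theta) = \operatorname{sign}(v)\max(|v|-\theta, 0)$, which is the proximal map of $\theta\|\cdot\|_1$. As a quick concrete check, a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def soft_threshold(v, theta):
    """Proximal map of theta*||.||_1: argmin_x 0.5*||x - v||^2 + theta*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
# Entries inside [-1, 1] are zeroed; the rest are shrunk toward zero by 1.0.
print(soft_threshold(v, 1.0))
```

Because the objective separates across coordinates, it suffices to verify the scalar case, e.g. that $x = 1$ minimizes $\tfrac{1}{2}(x-2)^2 + |x|$.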

Notation for This Chapter

AMP operates on a linear observation model $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w}$ with $\mathbf{A} \in \mathbb{R}^{M \times N}$, where $N$ and $M$ are large. We track iterates indexed by $t = 0, 1, 2, \ldots$
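To anchor the symbols that follow, here is a minimal sketch of an AMP iteration for this model, using a soft-threshold denoiser and the common adaptive rule $\theta_t = \alpha \hat{\tau}_t$ (the function names and the value $\alpha = 1.5$ are illustrative choices, not the chapter's fixed convention):

```python
import numpy as np

def soft_threshold(v, theta):
    # eta(v; theta): zero inside [-theta, theta], shrunk by theta outside
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(A, y, alpha=1.5, n_iter=50):
    """Sketch of AMP for y = A x + w with a soft-threshold denoiser."""
    M, N = A.shape
    x = np.zeros(N)          # x^0 = 0
    z = y.copy()             # r^0 = y
    for _ in range(n_iter):
        tau = np.sqrt(np.mean(z ** 2))          # estimate of effective noise std
        v = x + A.T @ z                         # pseudo-data: approx x + N(0, tau^2 I)
        x_new = soft_threshold(v, alpha * tau)  # scalar denoising step
        # Onsager correction: (N/M) * r^t * <eta'(v)>; eta' = 1 off the threshold zone
        z = y - A @ x_new + (N / M) * z * np.mean(np.abs(v) > alpha * tau)
        x = x_new
    return x
```

Dropping the Onsager term recovers an ISTA-like iteration; the correction is what keeps the effective noise in $v$ approximately Gaussian from one iteration to the next, which is exactly what the state-evolution analysis tracks.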

| Symbol | Meaning | Introduced |
|---|---|---|
| $\mathbf{A}$ | Sensing / measurement matrix ($M \times N$, typically i.i.d. Gaussian with $\mathcal{N}(0, 1/M)$ entries) | s01 |
| $\mathbf{x} \in \mathbb{R}^N$ | Unknown signal to recover (assumed sparse or structured) | s01 |
| $\mathbf{y} \in \mathbb{R}^M$ | Observation vector | s01 |
| $\mathbf{w}$ | Additive Gaussian noise, $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$ | s01 |
| $\delta = M/N$ | Undersampling ratio (measurements per unknown) | s01 |
| $\rho$ | Signal sparsity fraction ($\|\mathbf{x}\|_0 / N$) | s02 |
| $\hat{\mathbf{x}}^t$ | AMP estimate at iteration $t$ | s01 |
| $\mathbf{r}^t$ | AMP residual at iteration $t$ | s01 |
| $\eta(\cdot;\theta)$ | Denoiser function with parameter $\theta$ (e.g., threshold, noise variance) | s01 |
| $\langle \eta'(\cdot) \rangle$ | Empirical average of the denoiser's derivative (Onsager coefficient) | s01 |
| $\tau_t^2$ | State-evolution variance at iteration $t$ | s02 |
| $\mathrm{MSE}(\tau^2)$ | Scalar MSE of the denoiser in effective noise of variance $\tau^2$ | s02 |
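The last two rows are linked by the state-evolution recursion $\tau_{t+1}^2 = \sigma^2 + \frac{1}{\delta}\,\mathrm{MSE}(\tau_t^2)$. A Monte Carlo sketch for a Bernoulli--Gaussian signal with a soft-threshold denoiser and threshold rule $\theta = \alpha\tau$ (function name, $\alpha$, and sample sizes are illustrative):

```python
import numpy as np

def se_recursion(delta, rho, sigma2, alpha=1.5, n_iter=30, n_mc=200_000, seed=0):
    """Monte Carlo state evolution: tau_{t+1}^2 = sigma^2 + MSE(tau_t^2) / delta."""
    rng = np.random.default_rng(seed)
    # Bernoulli-Gaussian signal: nonzero with prob. rho, standard normal when nonzero
    x = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho)
    tau2 = sigma2 + np.mean(x ** 2) / delta   # from the all-zero initialization x^0 = 0
    history = [tau2]
    for _ in range(n_iter):
        v = x + np.sqrt(tau2) * rng.standard_normal(n_mc)   # scalar AWGN channel
        theta = alpha * np.sqrt(tau2)
        x_hat = np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
        mse = np.mean((x_hat - x) ** 2)       # MSE(tau_t^2) for this denoiser
        tau2 = sigma2 + mse / delta
        history.append(tau2)
    return history
```

In the recovery regime (e.g. $\delta = 0.5$, $\rho = 0.05$) the sequence $\tau_t^2$ contracts toward a fixed point near the noise floor $\sigma^2$; outside it, the recursion stalls at a large fixed point.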