Chapter 19 Summary: EM-Based Methods and Hyperparameter Estimation

Key Points

  1. EM-GAMP eliminates manual hyperparameter tuning. The algorithm wraps GAMP in an EM loop: the E-step runs GAMP to compute posterior means $\hat{x}_i$, variances $\tau_{x,i}$, and inclusion probabilities $\pi_i$; the M-step updates $\sigma^2$, $\rho$, and $\sigma_x^2$ in closed form. Interleaving (one GAMP step per EM step) is efficient and reaches near-oracle NMSE, typically within 0.1 dB, from arbitrary initialization (a minimal sketch of the loop follows this list).

  2. The output function $g_{\text{out}}$ is GAMP's interface to any likelihood. For Gaussian noise, $g_{\text{out}} = (y - \hat{p})/(\sigma^2 + \tau_p)$ and GAMP reduces to AMP. For 1-bit measurements, $g_{\text{out}}$ is built from the inverse Mills ratio of the probit likelihood; for Poisson, it uses a Laplace approximation. The same GAMP loop handles all cases: only $g_{\text{out}}$ changes (see the $g_{\text{out}}$ sketch after this list).

  3. 1-bit CS enables low-power receiver arrays. Replacing a 12-bit ADC with a single comparator reduces power from $\sim 12$ mW to $< 0.1$ mW per channel. With GAMP's probit output function, near-standard CS performance is recovered at oversampling ratios $M/N \geq 2$–$3$, enabling dense receiver arrays at a fraction of the conventional hardware cost (the 1-bit measurement model is sketched below).

  4. Multi-layer VAMP integrates deep generative priors into message passing. ML-VAMP infers the latent code $\mathbf{z}^{(L)}$ of a deep generative model from the measurements, rather than the full scene directly. When the latent dimension $J \ll N$, reliable reconstruction is achievable at $M/N$ ratios well below the standard CS phase transition. Each layer satisfies its own VAMP state evolution, coupled through inter-layer Gaussian messages (a toy layered model is sketched below).

  5. BiG-AMP addresses bilinear problems (unknown matrix + unknown signal). Applications include blind calibration of antenna arrays, dictionary learning from scene data, and matrix completion. BiG-AMP alternates between GAMP-based signal estimation (given $\hat{\mathbf{A}}$) and GAMP-based matrix estimation (given $\hat{\mathbf{c}}$), with convergence guaranteed under mild conditions (an alternating-update sketch follows this list).

  6. All three methods share the same GAMP infrastructure. EM-GAMP adds an outer EM loop; GAMP for GLMs changes only $g_{\text{out}}$; ML-VAMP stacks GAMP modules in a forward-backward architecture. Building on the VAMP/OAMP implementation from Chapter 18, all three are incremental extensions of the same core algorithm.
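
A minimal sketch of the interleaved EM-GAMP loop from point 1, specialized to a Bernoulli-Gaussian prior with AWGN. The variable names (`tau_p`, `p_hat`, `pi`, ...) mirror the chapter notation, but the listing is an illustrative simplification, not the book's reference implementation; damping and other stabilization tricks are omitted.

```python
import numpy as np

def em_gamp(y, A, n_iters=200):
    """Interleaved EM-GAMP sketch: one GAMP step per EM step.

    Model: y = A x + w, w ~ N(0, sigma2 I), with a Bernoulli-Gaussian
    prior p(x_i) = (1 - rho) delta(x_i) + rho N(x_i; 0, sigma_x2).
    """
    M, N = A.shape
    A2 = A**2
    # Arbitrary hyperparameter initialization; EM tunes these.
    sigma2 = 0.1 * np.var(y)                  # noise variance (guess)
    rho = 0.1                                 # sparsity rate (guess)
    sigma_x2 = np.var(y)                      # active-coefficient variance (guess)
    x_hat = np.zeros(N)
    tau_x = np.full(N, rho * sigma_x2)
    s = np.zeros(M)
    for _ in range(n_iters):
        # ---- E-step: one GAMP iteration ----
        tau_p = A2 @ tau_x                    # output-side variances
        p_hat = A @ x_hat - tau_p * s         # Onsager-corrected estimate
        s = (y - p_hat) / (sigma2 + tau_p)    # g_out for Gaussian noise
        tau_s = 1.0 / (sigma2 + tau_p)
        tau_r = 1.0 / (A2.T @ tau_s)          # input-side variances
        r_hat = x_hat + tau_r * (A.T @ s)     # pseudo-measurement of x
        # Bernoulli-Gaussian denoiser: spike at 0, slab N(0, sigma_x2).
        v = sigma_x2 * tau_r / (sigma_x2 + tau_r)    # slab posterior variance
        m = r_hat * sigma_x2 / (sigma_x2 + tau_r)    # slab posterior mean
        log_odds = (np.log(rho / (1.0 - rho))
                    + 0.5 * np.log(tau_r / (sigma_x2 + tau_r))
                    + 0.5 * r_hat**2 * (1.0 / tau_r - 1.0 / (sigma_x2 + tau_r)))
        pi = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30.0, 30.0)))
        x_hat = pi * m
        tau_x = pi * (v + m**2) - x_hat**2
        # ---- M-step: closed-form hyperparameter updates ----
        rho = float(np.clip(pi.mean(), 1e-6, 1.0 - 1e-6))
        sigma_x2 = np.sum(pi * (m**2 + v)) / max(pi.sum(), 1e-12)
        z_hat = (tau_p * y + sigma2 * p_hat) / (sigma2 + tau_p)  # post. mean of Ax
        tau_z = sigma2 * tau_p / (sigma2 + tau_p)                # post. var of Ax
        sigma2 = np.mean((y - z_hat)**2 + tau_z)
    return x_hat, pi, (sigma2, rho, sigma_x2)
```

The interleaved schedule shown here (a single GAMP step per EM update) is the efficient variant described in the text; the alternative is to run GAMP to convergence inside each E-step before touching the hyperparameters.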
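
Point 2's claim that only $g_{\text{out}}$ changes across likelihoods can be made concrete. Below are the Gaussian output function from the text and a probit version for 1-bit data; the probit closed form, built from the inverse Mills ratio $\varphi/\Phi$, is the standard one from the GAMP literature, written here as a hedged sketch.

```python
import numpy as np
from scipy.stats import norm

def g_out_gaussian(y, p_hat, tau_p, sigma2):
    """AWGN output function: with this choice GAMP reduces to AMP."""
    return (y - p_hat) / (sigma2 + tau_p)

def g_out_probit(y, p_hat, tau_p, sigma2):
    """1-bit output function for y in {-1, +1}, p(y|z) = Phi(y z / sigma)."""
    c = y * p_hat / np.sqrt(sigma2 + tau_p)
    # Inverse Mills ratio pdf(c)/cdf(c); the floor avoids division by zero
    # for extreme arguments (a real implementation would work in log space).
    return y * norm.pdf(c) / (np.maximum(norm.cdf(c), 1e-12)
                              * np.sqrt(sigma2 + tau_p))
```

Swapping `g_out_gaussian` for `g_out_probit` in the E-step above, together with the matching variance term $\tau_s = -\partial g_{\text{out}}/\partial \hat{p}$, turns the same loop into a 1-bit solver.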
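
For point 3, the 1-bit measurement model itself is one line: the comparator keeps only the sign. A tiny setup sketch, with hypothetical sizes chosen to match the $M/N \geq 2$–$3$ oversampling regime quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                  # scene size (assumed)
M = 3 * N                                # oversampling ratio M/N = 3
rho_true, sigma = 0.05, 0.1              # sparsity and noise level (assumed)
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = rng.standard_normal(N) * (rng.random(N) < rho_true)
y = np.sign(A @ x + sigma * rng.standard_normal(M))   # comparator output
# Feed (y, A) to GAMP with g_out_probit. Note that 1-bit data determine
# the signal only up to scale, so accuracy is judged on x / ||x||.
```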
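
Point 4's counting argument ($J \ll N$ latent dimensions, so $M$ can scale with $J$ rather than $N$) is easiest to see on a toy generator. Everything here is hypothetical: a two-layer ReLU network stands in for the deep generative prior, and ML-VAMP itself is only indicated in the closing comment.

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, H = 1024, 32, 128                  # scene, latent, hidden dims (assumed)
W1 = rng.standard_normal((H, J)) / np.sqrt(J)
W2 = rng.standard_normal((N, H)) / np.sqrt(H)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)   # toy two-layer generator

M = 4 * J                                # measurements scale with J, not N
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = G(rng.standard_normal(J))            # scene drawn from the deep prior
y = A @ x + 0.01 * rng.standard_normal(M)
# Here M/N = 0.125, well below the standard CS phase transition. ML-VAMP
# would infer z (hence x = G(z)) by passing Gaussian messages forward and
# backward through the chain W1 -> relu -> W2 -> A, with each linear and
# nonlinear stage handled by its own VAMP module.
```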
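
Point 5's alternating structure can be skeletonized as follows. To stay self-contained, the two GAMP modules are replaced by ridge (regularized least-squares) solves, so this is a structural sketch of the alternation under the bilinear model $\mathbf{Y} \approx \mathbf{A}\mathbf{C}$, not BiG-AMP itself; a real implementation would carry per-entry variances and adaptive damping through each module.

```python
import numpy as np

def bilinear_alternation(Y, A_init, n_outer=20, lam=1e-2):
    """Alternating bilinear estimation in the spirit of BiG-AMP.

    Y : (M, T) data, A_init : (M, K) initial matrix estimate. Each GAMP
    module from the text is stood in for by a ridge solve.
    """
    A_hat = A_init.copy()
    K = A_hat.shape[1]
    I = np.eye(K)
    for _ in range(n_outer):
        # Signal step: estimate C given the current matrix estimate A_hat.
        C_hat = np.linalg.solve(A_hat.T @ A_hat + lam * I, A_hat.T @ Y)
        # Matrix step: estimate A given the current signal estimate C_hat.
        A_hat = np.linalg.solve(C_hat @ C_hat.T + lam * I, C_hat @ Y.T).T
    return A_hat, C_hat
```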

Looking Ahead

Part IV (Chapters 16–19) is now complete: a comprehensive Bayesian message-passing toolkit for RF imaging, from factor graphs and belief propagation (Ch. 16) through AMP and VAMP (Chs. 17–18) to self-tuning EM-GAMP and multi-layer inference (Ch. 19).

Part V transitions to deep learning approaches that use neural networks to directly learn components of the inference pipeline:

  • Chapter 20: Post-processing networks (MF + U-Net) and model-based deep learning, the simplest integration of learned components into the imaging pipeline.
  • Chapter 21: Plug-and-Play (PnP) priors, replacing the proximal operator with a learned denoiser and connecting to the EM-GAMP framework through the Tweedie denoiser interpretation.
  • Chapter 22: Score-based and diffusion priors, the state-of-the-art approach to learned priors for RF imaging, which can be viewed as a continuous-time extension of the multi-layer generative model from Section 19.3.

The message-passing perspective developed in Part IV will reappear in Chapter 27 (deep unfolding), where AMP and VAMP iterations are "unrolled" into trainable neural networks that learn optimal denoisers and step sizes end-to-end.