Prerequisites & Notation
Prerequisites for Chapter 19
This chapter extends the AMP/VAMP framework from Chapters 17–18 in two directions: (1) unknown hyperparameters, where EM-GAMP estimates the noise variance and sparsity automatically from the data; and (2) non-Gaussian likelihoods, where GAMP handles 1-bit, Poisson, and power-only measurements via a general output channel. A third extension, multi-layer inference, connects message passing to deep generative priors.
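To make the general output channel concrete, the following sketch draws measurements from a single linear mixing $z = \mathbf{A}\mathbf{x}$ under the likelihoods named above. This is a minimal illustration; the dimensions, sparsity level, and noise scales are arbitrary choices, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 400                                     # illustrative dimensions
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(m, n))  # i.i.d. Gaussian sensing matrix
x = rng.normal(size=n) * (rng.random(n) < 0.1)      # Bernoulli-Gaussian signal, sparsity 0.1
z = A @ x                                           # linear mixing output (pre-likelihood)

# Different output channels p(y | z) applied to the same z:
y_gauss  = z + rng.normal(0.0, 0.1, size=m)            # Gaussian likelihood
y_onebit = np.sign(z + rng.normal(0.0, 0.1, size=m))   # 1-bit (probit) likelihood
y_poiss  = rng.poisson(np.exp(z))                      # Poisson likelihood, rate exp(z)
y_power  = np.abs(z) ** 2 + rng.normal(0.0, 0.01, size=m)  # power-only (phaseless)
```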
- AMP and OAMP/VAMP (review Chapter 17)
Self-check: Can you write the AMP iteration and the state evolution recursion, and name one advantage of VAMP over AMP for structured matrices?
- Expectation-Maximization (EM) algorithm
Self-check: Can you state the E-step and M-step for a latent variable model and explain why EM is guaranteed not to decrease the log-likelihood?
- Bernoulli-Gaussian prior and MMSE denoiser (review Chapter 17)
Self-check: Can you compute the posterior inclusion probability and posterior mean for a Bernoulli-Gaussian prior given a noisy observation? (A sketch follows this list.)
- Generalized linear models (GLM)
Self-check: Can you write the GLM likelihood for Gaussian, Poisson, and logistic output channels?
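The Bernoulli-Gaussian self-check above has a closed-form answer. Here is a minimal sketch, assuming the usual scalar channel $\hat{r} = x + \mathcal{N}(0, \tau)$ and prior $x \sim \rho\,\mathcal{N}(0, \sigma_x^2) + (1-\rho)\,\delta_0$; the function and parameter names are illustrative, not the chapter's.

```python
import numpy as np

def bg_denoiser(r, tau, rho=0.1, sigma_x2=1.0):
    """MMSE denoiser for a Bernoulli-Gaussian prior.

    Observation model: r = x + noise, noise ~ N(0, tau).
    Prior: x ~ rho * N(0, sigma_x2) + (1 - rho) * delta_0.
    Returns the posterior mean and the posterior inclusion probability pi.
    """
    def gauss(v, var):
        return np.exp(-v**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    # Posterior inclusion probability pi = P(x != 0 | r):
    # compare the evidence of the "active" and "zero" mixture components.
    num = rho * gauss(r, sigma_x2 + tau)
    den = num + (1 - rho) * gauss(r, tau)
    pi = num / den

    # Posterior mean: Wiener shrinkage of r, weighted by pi.
    xhat = pi * (sigma_x2 / (sigma_x2 + tau)) * r
    return xhat, pi
```

Here `xhat` is the MMSE denoiser (the GAMP input function of the notation table) and `pi` is the posterior inclusion probability.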
Notation for Chapter 19
Symbols used throughout this chapter. The linear mixing $z = \mathbf{A}\mathbf{x}$ carries through all three sections; only the likelihood $p(y \mid z)$ changes.
| Symbol | Meaning | Introduced |
|---|---|---|
| $z_a = (\mathbf{A}\mathbf{x})_a$ | Linear mixing output (pre-likelihood) at measurement $a$ | s01 |
| $p(y_a \mid z_a)$ | Output channel (generalized likelihood) | s01 |
| $g_{\mathrm{out}}(\hat{p}, y, \tau_p)$ | Output function: MMSE of $z$ given $y$ and Gaussian cavity $\mathcal{N}(\hat{p}, \tau_p)$ | s01 |
| $g_{\mathrm{in}}(\hat{r}, \tau_r)$ | Input function (prior denoiser): MMSE of $x$ given $\hat{r} = x + \mathcal{N}(0, \tau_r)$ | s01 |
| $\theta = (\sigma^2, \rho, \sigma_x^2)$ | Unknown hyperparameters: noise variance, sparsity, signal variance | s01 |
| $\hat{\theta}^{(k)}$ | EM estimate of hyperparameters at outer iteration $k$ | s01 |
| $\pi_i = \Pr(x_i \neq 0 \mid \mathbf{y})$ | Posterior inclusion probability | s01 |
| $\tau_p^t, \tau_r^t$ | GAMP extrinsic variances at output and input sides, iteration $t$ | s01 |
| $\Phi(\cdot)$ | Standard Gaussian CDF (used in probit / 1-bit likelihood) | s02 |
| $\mathbf{h}^{(\ell)}$ | Hidden activations at layer $\ell$ of a multi-layer generative model | s03 |
| $\mathbf{A}^{(\ell)}$ | Linear mixing matrix at layer $\ell$ | s03 |
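To connect the $g_{\mathrm{out}}$ and $\Phi$ entries of the table, here is a minimal sketch of the output function for the 1-bit (probit) channel $y = \operatorname{sign}(z + w)$, $w \sim \mathcal{N}(0, \sigma^2)$, given the Gaussian cavity $\mathcal{N}(\hat{p}, \tau_p)$. The closed form follows from standard Gaussian integrals; the function signature is an assumption for illustration, not the chapter's API.

```python
import numpy as np
from scipy.stats import norm

def g_out_probit(p_hat, y, tau_p, sigma2=0.0):
    """GAMP output function for the probit / 1-bit channel y = sign(z + w).

    Posterior over z:  p(z | y) ∝ Phi(y z / sigma) * N(z; p_hat, tau_p);
    sigma2 = 0 recovers the noiseless sign channel (truncated Gaussian).
    Returns g_out = (E[z | y] - p_hat) / tau_p.
    """
    s = np.sqrt(tau_p + sigma2)
    eta = y * p_hat / s
    # phi(eta) / Phi(eta) is the inverse Mills ratio of the truncated Gaussian.
    return y * norm.pdf(eta) / (norm.cdf(eta) * s)
```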