References & Further Reading

References

  1. S. Rangan, Generalized Approximate Message Passing for Estimation with Random Linear Mixing, 2011

    Foundational paper introducing GAMP and the output function $g_{\text{out}}$ for arbitrary likelihood models. Section s02 follows this work for the output channel derivations and state evolution analysis.

  2. J. P. Vila and P. Schniter, Expectation-Maximization Gaussian-Mixture Approximate Message Passing, 2013

    EM-GM-GAMP: the canonical reference for combining EM parameter learning with GAMP. Section s01 follows this work for the M-step closed-form updates. Also introduces the Gaussian mixture prior generalization of BG-GAMP.

  3. P. Schniter and S. Rangan, Compressive Phase Retrieval via Generalized Approximate Message Passing, 2015

    Applies GAMP to phase retrieval (power-only measurements) and 1-bit CS. Provides the probit output function and Mills ratio derivation used in Section s02. It also serves as the CommIT contribution reference for Section s01.
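The probit output function from this reference has a compact closed form built on the inverse Mills ratio $\varphi/\Phi$. The sketch below is a minimal illustration of that closed form, assuming the channel $p(y\mid z) = \Phi(yz/\sigma_n)$ with $y \in \{-1,+1\}$ and pseudo-prior $z \sim \mathcal{N}(\hat p, \tau_p)$; the function name and signature are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def gout_probit(y, p_hat, tau_p, sigma_n):
    """GAMP output function for the probit channel p(y|z) = Phi(y*z/sigma_n),
    y in {-1, +1}, with pseudo-prior z ~ N(p_hat, tau_p).

    Returns (g_out, -dg_out/dp_hat), where g_out = (E[z|y] - p_hat) / tau_p.
    """
    s = np.sqrt(sigma_n**2 + tau_p)
    alpha = y * p_hat / s
    r = norm.pdf(alpha) / norm.cdf(alpha)   # inverse Mills ratio phi/Phi
    g = y * r / s                           # posterior-mean correction / tau_p
    neg_dg = r * (alpha + r) / s**2         # negative derivative w.r.t. p_hat
    return g, neg_dg
```

Note that as $\alpha \to +\infty$ (the measurement agrees confidently with the pseudo-prior) the ratio vanishes and $g_{\text{out}} \to 0$, while for strongly contradicted measurements the correction grows, which is the qualitative behavior the Mills ratio derivation captures.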

  4. U. S. Kamilov, S. Rangan, A. K. Fletcher, and M. Unser, Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning, 2014

    Proves consistency of EM-GAMP parameter estimates in the large-system limit: as $M, N \to \infty$ with $M/N \to \delta$, the EM estimates converge to the true parameters whenever they are identifiable. The theoretical foundation for Section s01's convergence guarantees.

  5. J. T. Parker, P. Schniter, and V. Cevher, Bilinear Generalized Approximate Message Passing — Part I: Derivation, 2014

    BiG-AMP: GAMP for bilinear inference problems (unknown matrix + unknown signal). Section s03 uses this for blind calibration and dictionary learning examples.

  6. A. Manoel, F. Krzakala, M. Mézard, and L. Zdeborová, Multi-Layer Generalized Linear Estimation, 2017

    Multi-layer GAMP for deep generative models: derives the state evolution for cascaded GLMs and shows how ML-GAMP achieves Bayes-optimal inference in the large-system limit. Section s03's ML-VAMP analysis is based on this work.

  7. A. K. Fletcher, S. Rangan, and P. Schniter, Inference in Deep Networks in High Dimensions, 2018

    Extends the ML-GAMP state evolution to the VAMP framework (ML-VAMP), providing a rigorous analysis for layers with rotationally invariant (not just i.i.d. Gaussian) weight matrices. Used for the ML-VAMP algorithm and theorem in Section s03.

  8. Q. Zou, H. Zhang, and C. Yang, A Concise Tutorial on Approximate Message Passing, 2020

    Tutorial covering AMP, GAMP, and VAMP with implementation details and numerical examples. Good companion reference for the entire Part IV (Chapters 16–19). Includes Python code for the key algorithms.

Further Reading

Selected references for readers who want to go deeper into specific topics from this chapter.

  • Turbo-GAMP for structured signal models (Markov, group sparse)

    J. Ma, X. Yuan, and L. Ping, "Turbo Compressed Sensing with Partial DFT Sensing Matrix," IEEE Signal Process. Lett., vol. 22, no. 2, pp. 158–161, 2015.

    Combines GAMP with structured signal models (Markov chains, group sparsity) via a turbo message-passing architecture. Shows how the GAMP framework from Section s01 extends to exploit temporal or spatial correlation in the scene.

  • GAMP convergence analysis (beyond i.i.d. Gaussian matrices)

    P. Schniter, "Turbo Reconstruction of Structured Sparse Signals," Proc. CISS, 2010.

    Analyzes GAMP convergence for structured (non-Gaussian) sensing matrices. Provides convergence conditions and explains why damping is necessary when the state evolution does not apply exactly.
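The role of damping is easy to see on a toy fixed-point iteration: replacing each update with a convex combination of the candidate and the previous iterate shrinks the effective slope of the map, so an iteration that is locally expansive (and hence divergent) can be made contractive. The map `f` below is an arbitrary illustration, not GAMP itself.

```python
def damped_fixed_point(f, x0, beta, n_iter=50):
    """Iterate x <- beta * f(x) + (1 - beta) * x; beta = 1 is undamped."""
    x = x0
    for _ in range(n_iter):
        x = beta * f(x) + (1 - beta) * x
    return x

# f has fixed point x* = 1 with slope -1.5 there, so the plain iteration
# diverges; damping with beta = 0.5 gives effective slope
# 0.5 * (-1.5) + 0.5 = -0.25, which is contractive.
f = lambda x: -1.5 * x + 2.5
```

The same mechanism is what damped GAMP exploits when the sensing matrix violates the assumptions under which the state evolution guarantees convergence.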

  • Deep generative priors for inverse problems

    V. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam, "Optimizing the Latent Space of Generative Networks," arXiv:1707.05776, 2018.

    Alternative to ML-VAMP: directly optimizes in the latent space of a deep generator (e.g., GAN) by gradient descent on the data-consistency objective. Simpler to implement than ML-VAMP but lacks the Bayesian error bars and convergence guarantees.
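The latent-space approach can be illustrated with a linear toy generator: run gradient descent on the data-consistency objective $\tfrac12\|A\,G(z) - y\|^2$ over the latent code $z$. Everything below (the linear $G$, dimensions, step size) is an arbitrary toy setup for illustration, not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 4, 20, 10                       # latent dim, signal dim, measurements
G = rng.standard_normal((n, k))           # toy linear "generator" (illustrative)
A = rng.standard_normal((m, n)) / np.sqrt(m)

z_true = rng.standard_normal(k)
y = A @ (G @ z_true)                      # noiseless measurements of G(z_true)

# Gradient descent on 0.5 * ||A G z - y||^2 over the latent code z.
H = G.T @ A.T @ A @ G                     # Hessian of the objective (k x k)
step = 1.0 / np.linalg.eigvalsh(H).max()  # step size below 1/L for stability
z = np.zeros(k)
for _ in range(5000):
    z -= step * (G.T @ (A.T @ (A @ (G @ z) - y)))
```

With a deep nonlinear generator the objective is nonconvex and the gradient comes from backpropagation, which is why this approach, unlike ML-VAMP, carries no convergence guarantees or posterior uncertainty.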

  • 1-bit massive MIMO channel estimation

    C. Studer and G. Durisi, "Quantized Massive MU-MIMO-OFDM Uplink," IEEE Trans. Commun., vol. 64, no. 6, pp. 2387–2399, 2016.

    Extends the 1-bit compressed sensing framework from Section s02 to massive MIMO channel estimation with heavily quantized receivers. Shows that GAMP with a probit output channel achieves near-optimal rates even with 1-bit ADCs.

  • Score-based priors as continuous-time multi-layer models

    Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, "Score-Based Generative Modeling Through Stochastic Differential Equations," ICLR, 2021.

    The ML-VAMP generative prior from Section s03 can be seen as a discrete-layer approximation to a continuous-time diffusion model. This paper establishes the connection and is the foundation for Chapter 22's diffusion-based imaging.