GAMP: Generalized Approximate Message Passing
Beyond Gaussian Measurements
AMP, OAMP, and VAMP all assume Gaussian measurement noise: $y = Ax + w$ with $w \sim \mathcal{N}(0, \sigma^2 I)$. But many measurements in the real world are manifestly non-Gaussian: Poisson photon counts in astronomy and medical imaging, 1-bit or few-bit quantized samples from low-power ADCs, log-linear counts in spectroscopy, and logistic outcomes in classification.
GAMP (Generalized AMP) extends the AMP framework to generalized linear models (GLMs), where each measurement $y_a$ depends on the signal $x$ only through $z_a = (Ax)_a$, via an arbitrary per-element likelihood $p(y_a \mid z_a)$. The algorithm retains AMP's two-scalar state-evolution structure but adds a new component, an output channel denoiser, that handles the non-Gaussian likelihood.
The beauty of GAMP is that it treats the nonlinearity at the measurement end and the prior at the signal end symmetrically: both are handled by local MMSE steps, one in signal space, one in measurement space.
Definition: Generalized Linear Model (GLM)
A generalized linear model is a Markov chain $x \to z \to y$, where $z = Ax$ is the linear image of the signal, and the measurement $y$ is produced element-wise through a fixed conditional distribution $p(y \mid z) = \prod_{a=1}^{M} p(y_a \mid z_a)$.
The likelihood family $p(y_a \mid z_a)$ characterizes the GLM:
- standard linear regression: $p(y_a \mid z_a) = \mathcal{N}(y_a;\, z_a, \sigma^2)$;
- Poisson regression: $p(y_a \mid z_a) = \mathrm{Poisson}(y_a;\, e^{z_a})$ (log-linear rate);
- logistic regression: $p(y_a = 1 \mid z_a) = \big(1 + e^{-z_a}\big)^{-1}$;
- 1-bit compressed sensing: $y_a = \operatorname{sign}(z_a + w_a)$ with $w_a \sim \mathcal{N}(0, \sigma^2)$.
The GLM abstraction is a modelling unifier: once GAMP knows how to handle any such per-element likelihood, a vast range of inverse problems reduces to a single algorithm with a user-supplied output channel denoiser.
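To make the abstraction concrete, here is a minimal sketch of the four likelihood families above written as plain per-element functions. The function names and default parameter values are illustrative, not part of any library.

```python
import numpy as np
from scipy.stats import norm, poisson

def lik_gaussian(y, z, sigma2=0.1):
    """Standard linear regression: y = z + N(0, sigma2)."""
    return norm.pdf(y, loc=z, scale=np.sqrt(sigma2))

def lik_poisson(y, z):
    """Poisson regression with log-linear rate e^z."""
    return poisson.pmf(y, mu=np.exp(z))

def lik_logistic(y, z):
    """Logistic regression; y in {0, 1}."""
    p1 = 1.0 / (1.0 + np.exp(-z))
    return np.where(y == 1, p1, 1.0 - p1)

def lik_one_bit(y, z, sigma2=0.1):
    """1-bit compressed sensing; y in {-1, +1}."""
    return norm.cdf(y * z / np.sqrt(sigma2))
```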
Definition: GAMP Iteration
With $A$ having i.i.d. zero-mean entries with variance $1/M$, GAMP iterates (in the scalar-variance form)

$$\tau_p^t = \tfrac{N}{M}\,\tau_x^t, \qquad \hat{p}^t = A\hat{x}^t - \tau_p^t\, s^{t-1}, \qquad s^t = g_{\mathrm{out}}(\hat{p}^t, y, \tau_p^t),$$
$$\tau_r^t = \big(\mathbb{E}\big[-\partial_{\hat{p}}\, g_{\mathrm{out}}\big]\big)^{-1}, \qquad r^t = \hat{x}^t + \tau_r^t\, A^{\top} s^t, \qquad (\hat{x}^{t+1}, \tau_x^{t+1}) = \eta_{\mathrm{in}}(r^t, \tau_r^t).$$

The output channel denoiser is the MMSE estimate of the score of the likelihood,

$$g_{\mathrm{out}}(\hat{p}, y, \tau_p) = \frac{\mathbb{E}[z \mid y, \hat{p}, \tau_p] - \hat{p}}{\tau_p},$$

where the expectation is over $z \sim \mathcal{N}(\hat{p}, \tau_p)$ weighted by the likelihood $p(y \mid z)$. The input denoiser $\eta_{\mathrm{in}}(r, \tau_r) = \mathbb{E}[x \mid r = x + \mathcal{N}(0, \tau_r)]$ is the familiar signal-prior MMSE estimator.

Setting $p(y \mid z) = \mathcal{N}(y;\, z, \sigma^2)$ recovers standard AMP with $g_{\mathrm{out}}(\hat{p}, y, \tau_p) = (y - \hat{p})/(\sigma^2 + \tau_p)$. Every other likelihood simply changes this scalar Bayesian update.
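As a concrete reference, here is a minimal scalar-variance GAMP loop in NumPy, assuming $A$ has i.i.d. entries of variance $1/M$. The function names and the initialisation are illustrative; `g_out` and `eta_in` are user-supplied denoisers matching the definition above.

```python
import numpy as np

def gamp(A, y, g_out, eta_in, n_iter=50):
    """Minimal scalar-variance GAMP sketch in the notation of the definition.

    g_out(p_hat, y, tau_p) -> (s, tau_s): score vector s and the scalar
        tau_s = E[-ds/dp_hat] (average negative derivative of the score).
    eta_in(r, tau_r) -> (x_hat, tau_x): posterior mean and average posterior
        variance under the pseudo-channel r = x + N(0, tau_r).
    """
    M, N = A.shape
    col_norm2 = np.sum(A**2) / N          # ~1 per column for variance-1/M A
    x_hat, tau_x = np.zeros(N), 1.0       # illustrative initialisation
    s = np.zeros(M)
    for _ in range(n_iter):
        tau_p = (N / M) * col_norm2 * tau_x          # variance of p_hat - z
        p_hat = A @ x_hat - tau_p * s                # Onsager-corrected
        s, tau_s = g_out(p_hat, y, tau_p)            # output-channel denoiser
        tau_r = 1.0 / (tau_s * col_norm2)            # signal-side variance
        r = x_hat + tau_r * (A.T @ s)                # pseudo-observation of x
        x_hat, tau_x = eta_in(r, tau_r)              # input (prior) denoiser
    return x_hat
```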
Theorem: GAMP State Evolution
For i.i.d. sub-Gaussian sensing matrices and any GLM likelihood with bounded output MMSE, GAMP's iterates admit, in the large-system limit $M, N \to \infty$ with $\delta = M/N$ fixed, a scalar state evolution

$$\tau_p^{t} = \frac{1}{\delta}\,\mathcal{E}\big(\tau_r^{t-1}\big), \qquad \tau_r^{t} = \frac{1}{\mathcal{F}\big(\tau_p^{t}\big)},$$

where $\mathcal{E}(\tau_r)$ is the denoiser MSE under effective noise $\mathcal{N}(0, \tau_r)$, and $\mathcal{F}(\tau_p)$ is the Fisher information of the scalar channel $\hat{p} \to y$, $z \sim \mathcal{N}(\hat{p}, \tau_p)$, $y \sim p(y \mid z)$.
Once again the whole algorithm collapses to two scalar numbers at each iteration: the effective noise variance on the signal side ($\tau_r$) and on the measurement side ($\tau_p$). The presence of two variances (rather than one) reflects that GLM inference is a two-stage problem: decode the linear image $z$ from the measurements, then decode the signal $x$ from the linear image.
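The two-scalar structure makes the recursion trivial to iterate numerically. A sketch, assuming user-supplied curves for the input-denoiser MSE and the channel Fisher information (all names illustrative):

```python
def state_evolution(mse_in, fisher_out, delta, n_iter=50, tau_r0=1e3):
    """Iterate the GAMP state evolution.

    mse_in(tau_r)     -> input denoiser MSE under noise variance tau_r
    fisher_out(tau_p) -> Fisher information of the scalar output channel
    delta             -> sampling ratio M/N
    """
    tau_r = tau_r0                        # start from an uninformative state
    tau_p = None
    for _ in range(n_iter):
        tau_p = mse_in(tau_r) / delta     # close the signal side
        tau_r = 1.0 / fisher_out(tau_p)   # close the measurement side
    return tau_p, tau_r
```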
Conditioning and Gaussianity on each side
Treat $r^t$ and $\hat{p}^t$ as AMP-style pseudo-observations in their respective spaces. The i.i.d.-Gaussian structure of $A$ ensures that, after the Onsager correction, each pseudo-observation is $r^t \approx x + \mathcal{N}(0, \tau_r^t I)$ and $\hat{p}^t \approx z + \mathcal{N}(0, \tau_p^t I)$.
Close the measurement side
Given the scalar channel $z \sim \mathcal{N}(\hat{p}, \tau_p)$, $y \sim p(y \mid z)$, the MMSE of estimating $z$ from $(y, \hat{p})$ produces the measurement-side transfer function, which is inversely related to the Fisher information $\mathcal{F}(\tau_p)$ of the channel at the current operating point.
Close the signal side
This is the standard AMP argument: the input denoiser's MSE under Gaussian noise gives $\tau_p^{t} = \mathcal{E}(\tau_r^{t-1})/\delta$. Combining the two closures gives the coupled recursion.
Example: GAMP for 1-Bit Compressed Sensing
Consider 1-bit compressed sensing: the true signal $x$ is sparse, measurements are taken as $z = Ax$ with additive noise $w \sim \mathcal{N}(0, \sigma^2 I)$, and observed as $y = \operatorname{sign}(z + w)$. Derive the output channel denoiser $g_{\mathrm{out}}$.
Write the likelihood
The 1-bit + Gaussian-noise channel has $p(y = +1 \mid z) = \Phi(z/\sigma)$ and $p(y = -1 \mid z) = \Phi(-z/\sigma)$, compactly $p(y \mid z) = \Phi(yz/\sigma)$, where $\Phi$ is the standard Gaussian CDF.
Posterior of $z$ given $(y,\hat{p})$
Combining the likelihood $\Phi(yz/\sigma)$ with the pseudo-prior $z \sim \mathcal{N}(\hat{p}, \tau_p)$ gives a truncated-Gaussian posterior on $z$. Its mean is

$$\mathbb{E}[z \mid y, \hat{p}] = \hat{p} + \frac{y\,\tau_p}{\sqrt{\sigma^2 + \tau_p}}\,\frac{\phi(\eta)}{\Phi(\eta)}, \qquad \eta = \frac{y\,\hat{p}}{\sqrt{\sigma^2 + \tau_p}},$$

where $\phi$ is the standard Gaussian PDF.
Output denoiser
Applying the GAMP formula $g_{\mathrm{out}} = (\mathbb{E}[z \mid y, \hat{p}] - \hat{p})/\tau_p$ gives the closed-form 1-bit output denoiser

$$g_{\mathrm{out}}(\hat{p}, y, \tau_p) = \frac{y}{\sqrt{\sigma^2 + \tau_p}}\,\frac{\phi(\eta)}{\Phi(\eta)}.$$

It is bounded (unlike the linear-noise denoiser), a reflection of the fact that each 1-bit measurement carries at most 1 bit of information.
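A NumPy sketch of this denoiser follows; the function name is illustrative. It also returns the averaged negative score derivative $\mathbb{E}[-\partial_{\hat{p}}\, g_{\mathrm{out}}]$ needed by the GAMP loop sketched earlier.

```python
import numpy as np
from scipy.stats import norm

def g_out_one_bit(p_hat, y, tau_p, sigma2):
    """Closed-form 1-bit output denoiser derived above (y entries in {-1,+1}).
    Returns the score s and the scalar tau_s = E[-ds/dp_hat]."""
    c = np.sqrt(sigma2 + tau_p)
    eta = y * p_hat / c
    ratio = norm.pdf(eta) / norm.cdf(eta)       # inverse Mills ratio
    s = y * ratio / c                           # g_out(p_hat, y, tau_p)
    tau_s = ratio * (eta + ratio) / c**2        # negative score derivative
    return s, float(np.mean(tau_s))
```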
Definition: EM-GAMP for Automatic Hyperparameter Tuning
EM-GAMP interleaves GAMP iterations with EM updates of unknown hyperparameters $\theta$ (e.g., prior parameters and noise variance):

$$\theta^{k+1} = \arg\max_{\theta}\; \mathbb{E}_{x \sim q(x \mid y;\, \theta^{k})}\big[\log p(x, y;\, \theta)\big],$$

where the expectation over the posterior $q(x \mid y; \theta^{k})$ is approximated by the marginals produced at the end of the last GAMP run. The E-step is free (GAMP already gives the marginal moments) and the M-step reduces to closed-form updates for common parametric priors.
In practice one runs a handful of GAMP iterations, updates $\theta$ once, and repeats. Convergence is typically reached in 5–20 outer iterations. EM-GAMP is a workhorse when priors are unknown but parametric.
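For concreteness, a sketch of the outer loop for one common case, a Bernoulli-Gaussian prior with Gaussian noise. Here `run_gamp` is an assumed callable standing in for a full GAMP pass (returning posterior means, variances, and per-coefficient support probabilities; this interface is an assumption, not a fixed API), and the two M-step lines are the standard closed-form updates.

```python
import numpy as np

def em_gamp(A, y, run_gamp, lam0=0.1, sigma2_0=1.0, n_outer=20):
    """EM-GAMP outer loop sketch: Bernoulli-Gaussian prior, Gaussian noise.

    run_gamp(A, y, lam, sigma2) -> (x_hat, v_x, pi): posterior means,
    posterior variances, and support probabilities from a full GAMP run.
    """
    lam, sigma2 = lam0, sigma2_0
    for _ in range(n_outer):                       # 5-20 passes are typical
        x_hat, v_x, pi = run_gamp(A, y, lam, sigma2)   # E-step: free marginals
        lam = pi.mean()                                # M-step: sparsity level
        resid = y - A @ x_hat
        sigma2 = (resid @ resid + (A**2 @ v_x).sum()) / len(y)  # noise var
    return x_hat, lam, sigma2
```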
1-Bit Compressed Sensing with GAMP
Recover a sparse signal from 1-bit quantized measurements and track performance as a function of the oversampling ratio $\delta = M/N$. Compare GAMP against the AMP baseline that treats 1-bit measurements as real-valued (a model mismatch).
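A self-contained experiment sketch using the `gamp()` and `g_out_one_bit()` sketches above; the Bernoulli-Gaussian input denoiser and all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def eta_in_bg(r, tau_r, lam=0.1):
    """MMSE input denoiser for x ~ (1-lam) delta_0 + lam N(0, 1)."""
    g1 = norm.pdf(r, scale=np.sqrt(1.0 + tau_r))   # evidence: active coeff
    g0 = norm.pdf(r, scale=np.sqrt(tau_r))         # evidence: zero coeff
    pi = lam * g1 / (lam * g1 + (1.0 - lam) * g0)  # posterior support prob
    m, v = r / (1.0 + tau_r), tau_r / (1.0 + tau_r)
    x_hat = pi * m
    return x_hat, float(np.mean(pi * (v + m**2) - x_hat**2))

rng = np.random.default_rng(0)
N, lam, sigma2 = 512, 0.1, 0.01
for delta in (0.5, 1.0, 2.0, 4.0):                 # oversampling ratio M/N
    M = int(delta * N)
    x = rng.binomial(1, lam, N) * rng.standard_normal(N)
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    y = np.sign(A @ x + np.sqrt(sigma2) * rng.standard_normal(M))
    x_hat = gamp(A, y, lambda p, y_, t: g_out_one_bit(p, y_, t, sigma2),
                 eta_in_bg, n_iter=50)
    # 1-bit measurements lose the norm of x, so compare by correlation
    rho = x_hat @ x / (np.linalg.norm(x_hat) * np.linalg.norm(x) + 1e-12)
    print(f"delta={delta:.1f}  corr={rho:.3f}")
```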
Output Denoisers for Common GLMs
Visualize the output channel denoiser as a function of $\hat{p}$ for three common likelihoods: Gaussian, Poisson, and 1-bit. Notice how the 1-bit denoiser saturates while the Gaussian one is linear.
Parameters: effective noise variance $\tau_p$ on $z$.
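A sketch of the three curves at a fixed operating point ($\tau_p$, $\sigma^2$, and the observed $y$ values below are illustrative choices). The Poisson channel has no closed-form denoiser, so its posterior mean is computed by Gauss-Hermite quadrature.

```python
import numpy as np
from scipy.stats import norm

p_hat = np.linspace(-4.0, 4.0, 401)
tau_p, sigma2 = 0.5, 0.1

# Gaussian channel, y = 1.0: linear in p_hat
g_gauss = (1.0 - p_hat) / (sigma2 + tau_p)

# 1-bit channel, y = +1: saturates once sign(p_hat) agrees with y
c = np.sqrt(sigma2 + tau_p)
g_bit = norm.pdf(p_hat / c) / (c * norm.cdf(p_hat / c))

# Poisson channel (rate e^z), y = 3: E[z | y, p_hat] via quadrature
nodes, wts = np.polynomial.hermite_e.hermegauss(61)
def g_out_poisson(p):
    z = p + np.sqrt(tau_p) * nodes                 # z ~ N(p, tau_p) nodes
    w = wts * np.exp(3 * z - np.exp(z))            # times Poisson likelihood
    return (np.sum(w * z) / np.sum(w) - p) / tau_p
g_pois = np.array([g_out_poisson(p) for p in p_hat])
```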
GAMP Is More Fragile Than OAMP
Even on i.i.d. Gaussian sensing matrices, GAMP frequently diverges without damping. The two coupled transfer functions can create slowly converging or oscillating loops, particularly for heavy-tailed priors and heavily nonlinear output channels (1-bit, Poisson with low counts).
Standard remedies:
- Damping on both scalar states with a factor $\beta \in (0, 1)$, as sketched after this list.
- Variance flooring: clamp $\tau_p$ and $\tau_r$ away from zero.
- "Mean-only" damping: damp the vector iterates but not the precisions.
For structured sensing matrices, the right-rotationally-invariant (RRI) equivalents of GAMP ("GVAMP") are more robust and should be preferred in production.
Common Mistake: Output Channel Sign Convention
Mistake:
Confusing the sign of $g_{\mathrm{out}}$ with the sign of the residual $y - \hat{p}$. For Gaussian channels they agree; for Poisson or 1-bit channels the scaling is different and the sign can differ relative to the assumed operating point.
Correction:
Derive $g_{\mathrm{out}}$ directly from the definition $g_{\mathrm{out}} = (\mathbb{E}[z \mid y, \hat{p}] - \hat{p})/\tau_p$ and verify on a simple test case (scalar Gaussian likelihood with known SNR) that the output matches the closed-form solution $(y - \hat{p})/(\sigma^2 + \tau_p)$. Then trust the derivation in the generalized case.
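A minimal version of this check for the scalar Gaussian channel, runnable as-is:

```python
import numpy as np

def g_out_gaussian(p_hat, y, tau_p, sigma2):
    """MMSE definition of g_out for y = z + N(0, sigma2)."""
    z_post = p_hat + tau_p / (sigma2 + tau_p) * (y - p_hat)  # E[z | y, p_hat]
    return (z_post - p_hat) / tau_p

p_hat, y, tau_p, sigma2 = 0.3, 1.2, 0.5, 0.1
# The definition must reduce to the closed form (y - p_hat)/(sigma2 + tau_p).
assert np.isclose(g_out_gaussian(p_hat, y, tau_p, sigma2),
                  (y - p_hat) / (sigma2 + tau_p))
```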
Historical Note: Rangan and the Birth of GAMP
2011–2013: Sundeep Rangan's 2011 ISIT paper Generalized Approximate Message Passing for Estimation with Random Linear Mixing marked the transition of AMP from a specialized sparse-recovery algorithm to a general-purpose Bayesian inference engine. The key insight, that the Onsager-corrected residual on the measurement side could be reused for any generalized linear model, opened the door to a flood of applications in quantized sensing, phase retrieval, and machine learning.
The subsequent EM-GAMP variant (Vila & Schniter, 2013) made GAMP parameter-free, eliminating the need to specify the prior or noise level in advance. For a decade GAMP was the default engine for Bayesian compressed sensing in applications ranging from hyperspectral unmixing to quantitative MRI.
GAMP (Generalized AMP)
An extension of AMP to generalized linear models with per-element likelihoods. Adds an output-channel denoiser to handle non-Gaussian measurements (1-bit, Poisson, etc.) while preserving the scalar state-evolution structure of AMP.
Related: Generalized linear model (GLM), EM-GAMP
Generalized linear model (GLM)
A statistical model where the measurement $y$ depends on the signal only through a linear image $z = Ax$, via a fixed per-element likelihood $p(y_a \mid z_a)$. Generalizes linear regression to Poisson, logistic, quantized, and many other observation models.
Related: GAMP (Generalized AMP)
EM-GAMP
A variant of GAMP that alternates message-passing iterations with EM updates of the prior and noise hyperparameters. Enables automatic tuning without advance specification of the prior parameters or the noise level.
Related: GAMP (Generalized AMP)
Quick Check
In GAMP, which of the following determines the output denoiser $g_{\mathrm{out}}$?
The signal prior only.
The per-element measurement likelihood $p(y_a \mid z_a)$ and the current state $(\hat{p}, \tau_p)$.
The structure of the sensing matrix $A$.
The signal sparsity level.
Correct. $g_{\mathrm{out}}$ is the Bayesian score of the channel $\hat{p} \to y$ under the likelihood $p(y \mid z)$.
Why This Matters: 1-Bit GAMP in Massive MIMO
Massive MIMO uplinks with 1-bit ADCs are a rare wireless setting where a non-Gaussian likelihood dominates the receiver design. Each antenna's ADC produces one sign bit per sampling instant for each of the in-phase and quadrature components of the received signal plus noise. Channel estimation in this regime is exactly the 1-bit compressed-sensing problem, and GAMP with the 1-bit output denoiser is competitive with much more expensive algorithms.
The CommIT group's activity-detection pipeline (Ch 14) leverages this observation: the two-step GAMP structure, output denoiser for the quantized measurement and input denoiser for the sparse user activity, gives a principled low-complexity receiver that the group has extended to unsourced random access with large numbers of potential users.
See full treatment in Chapter 14, Section sec-activity-detection.