OAMP/VAMP for the RF Imaging Operator

The OAMP Idea: Orthogonalize, Then Denoise

AMP fails for structured matrices because the residual is correlated with the estimate. The fix is conceptually simple: replace the scalar Onsager correction with a linear operator that explicitly orthogonalizes the error.

OAMP (Orthogonal AMP) achieves this by alternating two steps:

  1. LMMSE step: a linear estimator that uses the full SVD of $\mathbf{A}$ to produce an estimate whose error is uncorrelated with the input.
  2. Denoiser step: a (possibly nonlinear) denoiser that exploits the prior on $\mathbf{c}$.

The orthogonality between the errors of the two steps is the key invariant that enables state evolution for arbitrary right-rotationally-invariant matrices, a much larger class than i.i.d. Gaussian.

Definition:

Right-Rotationally-Invariant Matrices

A random matrix $\mathbf{A} \in \mathbb{C}^{M \times N}$ is right-rotationally invariant if its distribution is unchanged under right multiplication by any unitary matrix:

$$\mathbf{A} \stackrel{d}{=} \mathbf{A}\mathbf{U} \qquad \text{for all unitary } \mathbf{U}.$$

Equivalently, the SVD $\mathbf{A} = \mathbf{U}_L \boldsymbol{\Sigma} \mathbf{V}^H$ has $\mathbf{V}$ uniformly distributed on the unitary group (Haar-distributed), independent of $\boldsymbol{\Sigma}$.

I.i.d. Gaussian matrices are right-rotationally invariant, but the class is much larger. For instance, any matrix of the form $\mathbf{A} = \mathbf{B}\mathbf{V}$ with $\mathbf{V}$ Haar and $\mathbf{B}$ deterministic is right-rotationally invariant; this includes matrices with arbitrary prescribed singular values.
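The construction $\mathbf{A} = \mathbf{B}\mathbf{V}$ can be sketched directly. The helper names below are illustrative; the QR phase correction is needed so that the sampled unitary is exactly Haar-distributed:

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample a Haar-distributed unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # fix the QR phase ambiguity column-wise

def rri_matrix(singular_values, M, N, rng):
    """Build A = B V with prescribed singular values and Haar V:
    right-rotationally invariant by construction."""
    k = min(M, N)
    B = np.zeros((M, N), dtype=complex)
    B[:k, :k] = np.diag(singular_values[:k])
    return B @ haar_unitary(N, rng)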

Historical Note: The Parallel Development of OAMP and VAMP

2017–2019

OAMP and VAMP were developed independently and nearly simultaneously. Ma and Ping (2017) derived OAMP by imposing a divergence-free condition on the linear estimator: they showed that the LMMSE estimator with a specific bias subtraction produces orthogonal errors. Rangan, Schniter, and Fletcher (2019) derived VAMP from the expectation consistency framework, arriving at an equivalent algorithm with a more symmetric structure. The two formulations are now understood to be equivalent for right-rotationally invariant matrices; we use "OAMP" throughout this chapter but note the correspondence.


Definition:

OAMP Algorithm

Let $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H$ be the (compact or full) SVD. OAMP alternates:

LMMSE step (linear estimator):

$$\hat{\mathbf{c}}_1^t = \hat{\mathbf{c}}_2^{t-1} + \frac{1}{\alpha_t}\,\mathbf{W}_t\bigl(\mathbf{y} - \mathbf{A}\hat{\mathbf{c}}_2^{t-1}\bigr), \qquad \alpha_t = \frac{1}{N}\operatorname{tr}(\mathbf{W}_t\mathbf{A}),$$

where $\mathbf{W}_t = v_2^{t-1}\,\mathbf{A}^{H}(\sigma^2\mathbf{I} + v_2^{t-1}\,\mathbf{A}\mathbf{A}^{H})^{-1}$ is the LMMSE filter with prior variance $v_2^{t-1}$ (the error variance of the previous denoiser step). The normalization by $\alpha_t$ makes the linear estimator divergence-free, which is what produces orthogonal errors.

MSE of LMMSE step:

$$v_1^t = v_2^{t-1}\,\frac{1-\alpha_t}{\alpha_t}.$$

Denoiser step (divergence-free nonlinear estimator):

$$\hat{\mathbf{c}}_2^t = \frac{\eta_t\bigl(\hat{\mathbf{c}}_1^t\bigr) - d_t\,\hat{\mathbf{c}}_1^t}{1 - d_t}, \qquad v_2^t = v_1^t\,\frac{d_t}{1 - d_t},$$

where $d_t = \operatorname{div}(\eta_t) = \frac{1}{N}\sum_{i} \frac{\partial [\eta_t(\hat{\mathbf{c}}_1^t)]_i}{\partial [\hat{\mathbf{c}}_1^t]_i}$. The reconstruction reported at iteration $t$ is the plain denoiser output $\eta_t(\hat{\mathbf{c}}_1^t)$; $\hat{\mathbf{c}}_2^t$ is the extrinsic message passed back to the LMMSE step.


OAMP for Linear Inverse Problems

Complexity: dominated by the SVD (computed once): $O(\min(M,N) \cdot MN)$. Per iteration: $O(MN)$ for the LMMSE matrix-vector products.
Input: $\mathbf{y}$, $\mathbf{A}$, $\sigma^2$, denoiser $\eta_t$, max iterations $T$
Preprocessing: compute the SVD $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H$; store the singular values $\{s_i\}$
Initialize: $\hat{\mathbf{c}}_2^0 = \mathbf{0}$, $v_2^0 = \mathbb{E}[|c|^2]$ (prior variance)
for $t = 1, 2, \ldots, T$:
1. LMMSE step:
$\alpha_t = \frac{1}{N}\operatorname{tr}(\mathbf{W}_t\mathbf{A}) = \frac{1}{N}\sum_{i=1}^{\min(M,N)} \frac{v_2^{t-1}\,s_i^2}{\sigma^2 + v_2^{t-1}\,s_i^2}$, with $\mathbf{W}_t = v_2^{t-1}\mathbf{A}^{H}(\sigma^2\mathbf{I} + v_2^{t-1}\mathbf{A}\mathbf{A}^{H})^{-1}$
$\hat{\mathbf{c}}_1^t = \hat{\mathbf{c}}_2^{t-1} + \frac{1}{\alpha_t}\mathbf{W}_t(\mathbf{y} - \mathbf{A}\hat{\mathbf{c}}_2^{t-1})$
$v_1^t = v_2^{t-1}(1-\alpha_t)/\alpha_t$
2. Denoiser step:
$d_t = \operatorname{div}(\eta_t)$ (analytical or via the Hutchinson estimator)
$\hat{\mathbf{c}}_2^t = \bigl(\eta_t(\hat{\mathbf{c}}_1^t) - d_t\,\hat{\mathbf{c}}_1^t\bigr)/(1 - d_t)$
$v_2^t = v_1^t\,d_t/(1 - d_t)$
3. Convergence check:
If $\|\hat{\mathbf{c}}_2^t - \hat{\mathbf{c}}_2^{t-1}\| / \|\hat{\mathbf{c}}_2^t\| < \varepsilon$, stop.
Output: $\hat{\mathbf{c}} = \eta_T(\hat{\mathbf{c}}_1^T)$

The SVD is computed once as preprocessing. All subsequent LMMSE steps use the precomputed singular values and vectors to evaluate the Woodbury-form inverse efficiently.
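A minimal numerical sketch of OAMP in the common divergence-free form, with a complex soft-threshold denoiser standing in for the BG-MMSE denoiser of this chapter (`theta`, the threshold scale, is an illustrative tuning knob, not a quantity from the text):

```python
import numpy as np

def soft(x, t):
    """Complex soft-threshold denoiser (stand-in for BG-MMSE)."""
    mag = np.maximum(np.abs(x), 1e-30)
    return np.where(mag > t, (1 - t / mag) * x, 0)

def soft_div(x, t):
    """Average (Wirtinger) divergence of the complex soft threshold."""
    mag = np.maximum(np.abs(x), 1e-30)
    return np.mean(np.where(mag > t, 1 - t / (2 * mag), 0.0))

def oamp(y, A, sigma2, prior_var, T=30, theta=1.5):
    M, N = A.shape
    U, s, Vh = np.linalg.svd(A, full_matrices=False)   # one-time preprocessing
    c2 = np.zeros(N, dtype=complex)
    v2 = prior_var
    c_hat = c2
    for _ in range(T):
        # Divergence-free LMMSE step, evaluated through the SVD
        gain = v2 * s / (sigma2 + v2 * s**2)              # spectral filter of W_t
        alpha = np.sum(v2 * s**2 / (sigma2 + v2 * s**2)) / N   # tr(W_t A)/N
        c1 = c2 + (Vh.conj().T @ (gain * (U.conj().T @ (y - A @ c2)))) / alpha
        v1 = v2 * (1 - alpha) / alpha
        # Divergence-free denoiser step with analytical divergence
        t = theta * np.sqrt(v1)
        eta = soft(c1, t)
        d = np.clip(soft_div(c1, t), 1e-6, 1 - 1e-6)
        c2 = (eta - d * c1) / (1 - d)                     # extrinsic message
        v2 = v1 * d / (1 - d)
        c_hat = eta                                       # current reconstruction
    return c_hat
```

With the SVD precomputed, each iteration costs a few matrix-vector products, matching the $O(MN)$ per-iteration figure above.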


Theorem: Orthogonality of OAMP Error Vectors

Let $\mathbf{A}$ be right-rotationally invariant with singular values $\{s_i\}$. Define the errors $\mathbf{e}_1^t = \hat{\mathbf{c}}_1^t - \mathbf{c}$ and $\mathbf{e}_2^t = \hat{\mathbf{c}}_2^t - \mathbf{c}$. Then in the large-system limit ($M, N \to \infty$ with $M/N \to \delta$):

$$\frac{1}{N}\,(\mathbf{e}_1^{t})^{H}\,\mathbf{e}_2^{t-1} \xrightarrow{\text{a.s.}} 0.$$

Moreover, the LMMSE output $\hat{\mathbf{c}}_1^t$ satisfies

$$\hat{\mathbf{c}}_1^t \stackrel{d}{\approx} \mathbf{c} + \sqrt{v_1^t}\,\mathbf{z}, \qquad \mathbf{z} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}),$$

so the denoiser at each iteration faces a scalar AWGN channel with known noise variance $v_1^t$.

The LMMSE step is designed so that its output error is orthogonal to its input error (the error of the denoiser output from the previous iteration). This is the direct analogue of the Onsager decorrelation in AMP, but achieved through a matrix operation (the divergence-free LMMSE filter) rather than a scalar correction. Because the orthogonalization uses the full SVD of $\mathbf{A}$, it works for any singular value distribution, not just the Marchenko–Pastur law of i.i.d. Gaussian matrices.
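The orthogonality invariant is easy to probe numerically. This sketch (illustrative sizes and variances) applies one divergence-free LMMSE step to a synthetic input with known error variance and measures the normalized correlation between successive errors:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, sigma2, v2 = 200, 400, 1e-2, 0.25

# Ground truth and a "previous denoiser output" with error variance v2
c = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
e2 = np.sqrt(v2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
c2 = c + e2

# i.i.d. Gaussian A is right-rotationally invariant
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = A @ c + w

# One divergence-free LMMSE step
W = v2 * A.conj().T @ np.linalg.inv(sigma2 * np.eye(M) + v2 * A @ A.conj().T)
alpha = np.trace(W @ A).real / N
c1 = c2 + (W @ (y - A @ c2)) / alpha
e1 = c1 - c

# Normalized correlation of successive errors: near zero, shrinking with N
rho = abs(e1.conj() @ e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
```

Because $\operatorname{tr}(\mathbf{I} - \mathbf{W}\mathbf{A}/\alpha) = 0$ by construction, the expected error correlation vanishes exactly, and the empirical value fluctuates at the $O(1/\sqrt{N})$ scale.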


Theorem: State Evolution for OAMP

For right-rotationally invariant $\mathbf{A}$ whose empirical singular value distribution converges to a deterministic limit, the MSE of OAMP is tracked by the two-step state evolution:

$$v_1^t = \Bigl(\frac{1}{N}\sum_{i=1}^{\min(M,N)} \frac{s_i^2}{\sigma^2 + v_2^{t-1}\,s_i^2}\Bigr)^{-1} - v_2^{t-1},$$

$$v_2^t = \Bigl(\frac{1}{\text{mmse}(\eta_t, v_1^t)} - \frac{1}{v_1^t}\Bigr)^{-1},$$

where $\text{mmse}(\eta_t, v_1^t) = \mathbb{E}\bigl[|\eta_t(C_0 + \sqrt{v_1^t}\,Z) - C_0|^2\bigr]$ is the MSE of the denoiser on a scalar AWGN channel with noise variance $v_1^t$, and $v_2^t$ is the corresponding extrinsic variance of the divergence-free denoiser step. (The first equation is equivalent to $v_1^t = v_2^{t-1}(1-\alpha_t)/\alpha_t$.)

State evolution for OAMP involves two scalar quantities ($v_1^t$ for the LMMSE step and $v_2^t$ for the denoiser step) rather than one. The first equation depends on the full singular value distribution of $\mathbf{A}$ (not just $\delta = M/N$ as in AMP). This is why OAMP can handle structured matrices.
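The two-scalar recursion can be sketched directly. Here the mmse term is estimated by Monte Carlo for a Bernoulli-Gaussian prior with a soft-threshold denoiser (both illustrative choices; `theta` is a hypothetical threshold scale):

```python
import numpy as np

def soft(x, t):
    mag = np.maximum(np.abs(x), 1e-30)
    return np.where(mag > t, (1 - t / mag) * x, 0)

def se_oamp(s, N, sigma2, rho, T=15, theta=1.5, n_mc=100_000, seed=0):
    """Two-scalar OAMP state evolution; s holds the singular values of A."""
    rng = np.random.default_rng(seed)
    # Monte Carlo samples from a Bernoulli-Gaussian prior (active variance 1)
    c0 = np.where(rng.random(n_mc) < rho,
                  (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc)) / np.sqrt(2),
                  0)
    z = (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc)) / np.sqrt(2)
    v2, history = rho, []
    for _ in range(T):
        # LMMSE branch: depends on the full singular value spectrum
        v1 = 1.0 / (np.sum(s**2 / (sigma2 + v2 * s**2)) / N) - v2
        # Denoiser branch: mmse on a scalar AWGN channel, by Monte Carlo
        mmse = np.mean(np.abs(soft(c0 + np.sqrt(v1) * z, theta * np.sqrt(v1)) - c0) ** 2)
        d = np.clip(mmse / v1, 1e-6, 1 - 1e-6)
        v2 = v1 * d / (1 - d)       # extrinsic variance fed back to the LMMSE branch
        history.append((v1, v2))
    return history
```

Feeding in the singular values of any right-rotationally invariant matrix (not just i.i.d. Gaussian) predicts the per-iteration MSE, which is what enables the offline parameter optimization mentioned later in the chapter.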


OAMP State Evolution vs Simulation

Compare the state-evolution prediction (dashed) with the empirical NMSE of OAMP (solid) for different matrix types and problem sizes. State evolution becomes exact as $N$ grows.


AMP vs OAMP/VAMP

| Property | AMP | OAMP/VAMP |
| --- | --- | --- |
| Matrix requirement | I.i.d. Gaussian | Right-rotationally invariant |
| Decorrelation mechanism | Scalar Onsager correction | LMMSE orthogonalization (matrix-valued) |
| State evolution | One scalar: $\tau_{t+1} = f(\tau_t)$ | Two scalars: $v_1^t, v_2^t$ |
| SE depends on | $\delta = M/N$ only | Full singular value distribution of $\mathbf{A}$ |
| Preprocessing | None | SVD of $\mathbf{A}$ (one-time) |
| Per-iteration cost | $O(MN)$ | $O(MN)$ |
| Denoiser requirement | Lipschitz, scalar divergence | Same, plus divergence estimation |
| Bayes-optimal? | Yes (for i.i.d. $\mathbf{A}$) | Yes (for RRI $\mathbf{A}$) |
| RF imaging matrices | Diverges | Converges |

Example: OAMP on a 2D RF Imaging Problem

Consider a 2D RF imaging scene with $N_1 = N_2 = 32$ (so $N = 1024$ voxels), observed by $M_1 = M_2 = 20$ Tx-Rx pairs and frequencies ($M = 400$ measurements). The sensing matrix is $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2$ with partial DFT factors. The reflectivity is Bernoulli-Gaussian with $\rho = 0.08$ and $\text{SNR} = 30\,\text{dB}$.

Run OAMP with the BG-MMSE denoiser and compare the reconstruction with the backpropagation image $\hat{\mathbf{c}}^{\text{BP}} = \mathbf{A}^{H}\mathbf{D}^{-1}\mathbf{y}$.
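The Kronecker-structured operator can be sketched as follows, assuming the partial DFT factors are uniformly random row subsets (an assumption; the chapter's exact row selection may differ). The identity $(\mathbf{A}_1 \otimes \mathbf{A}_2)\,\mathrm{vec}(\mathbf{C}) = \mathrm{vec}(\mathbf{A}_2 \mathbf{C} \mathbf{A}_1^T)$ lets matrix-vector products avoid forming the full $400 \times 1024$ matrix:

```python
import numpy as np

def partial_dft(m, n, rng):
    """m randomly chosen rows of the n-point unitary DFT."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    rows = rng.choice(n, size=m, replace=False)
    return F[rows]

rng = np.random.default_rng(0)
N1 = N2 = 32
M1 = M2 = 20
A1, A2 = partial_dft(M1, N1, rng), partial_dft(M2, N2, rng)

def kron_matvec(A1, A2, c):
    """(A1 kron A2) @ c without forming the Kronecker product,
    via vec(A2 C A1^T) with column-major (Fortran-order) vec."""
    C = c.reshape(A2.shape[1], A1.shape[1], order='F')
    return (A2 @ C @ A1.T).reshape(-1, order='F')

c = (rng.standard_normal(N1 * N2) + 1j * rng.standard_normal(N1 * N2)) / np.sqrt(2)
y_fast = kron_matvec(A1, A2, c)
y_full = np.kron(A1, A2) @ c   # reference: explicit 400 x 1024 matrix
```

The same trick applies inside the LMMSE step, since the SVD of a Kronecker product is the Kronecker product of the factor SVDs.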

Why This Matters: OAMP in ISAC Systems

In integrated sensing and communications (ISAC) systems, the base station uses the same OFDM waveform for both data transmission and scene reconstruction. The sensing matrix $\mathbf{A}$ inherits the Kronecker structure from the array geometry and the OFDM subcarrier spacing. OAMP is the natural reconstruction algorithm because:

  • The Kronecker structure makes AMP inapplicable.
  • The LMMSE step exploits the known array/frequency structure.
  • State evolution enables offline optimization of the denoiser and system parameters.

Chapter 32 applies OAMP to a full ISAC scenario with simultaneous communication and imaging.


Quick Check

Which of the following matrices is right-rotationally invariant?

  • $\mathbf{A}$ with i.i.d. $\mathcal{CN}(0, 1/M)$ entries

  • A partial DFT matrix (random rows of the DFT)

  • $\mathbf{B}\mathbf{V}$ where $\mathbf{B}$ is deterministic and $\mathbf{V}$ is Haar-distributed

OAMP (Orthogonal Approximate Message Passing)

An iterative algorithm for linear inverse problems that alternates an LMMSE step (linear, exploiting the measurement matrix) with a denoiser step (nonlinear, exploiting the signal prior). The LMMSE step orthogonalizes the estimation errors, enabling state evolution for right-rotationally invariant matrices.

Related: OAMP Algorithm, Orthogonality of OAMP Error Vectors

VAMP (Vector Approximate Message Passing)

An equivalent formulation of OAMP derived from the expectation consistency framework (Rangan, Schniter, Fletcher, 2019). VAMP uses a symmetric two-denoiser structure and is provably Bayes-optimal for right-rotationally invariant matrices.

Related: The Parallel Development of OAMP and VAMP

Denoiser divergence

The normalized trace of the Jacobian of the denoiser: $\text{div}(\eta) = \frac{1}{N}\sum_{i=1}^N \frac{\partial \eta_i(\mathbf{r})}{\partial r_i}$. Required for computing the MSE update in OAMP. Can be estimated via the Hutchinson trace estimator for black-box denoisers.
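A minimal finite-difference Hutchinson sketch for a black-box denoiser (real Rademacher probes; for complex denoisers this approximates the Wirtinger divergence, and `eps`/`n_probe` are illustrative choices):

```python
import numpy as np

def hutchinson_divergence(denoiser, r, eps=1e-5, n_probe=4, seed=0):
    """Estimate div(eta) = (1/N) tr(Jacobian) via random probing:
    E[p^T J p] = tr(J) for Rademacher p, with J p approximated by
    a finite difference of the black-box denoiser."""
    rng = np.random.default_rng(seed)
    N = r.size
    base = denoiser(r)
    est = 0.0
    for _ in range(n_probe):
        p = rng.choice([-1.0, 1.0], size=N)            # Rademacher probe
        jvp = (denoiser(r + eps * p) - base) / eps     # J p, approximately
        est += np.real(p @ jvp) / N                    # p^T J p / N
    return est / n_probe
```

For a linear denoiser $\eta(\mathbf{r}) = a\mathbf{r}$ the estimate equals $a$ exactly; for nonlinear denoisers a handful of probes already gives the accuracy needed for the $v_2^t$ update.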

Related: OAMP Algorithm