Mismatch Analysis
What Happens When the Prior Is Wrong?
The BG-MMSE denoiser is Bayes-optimal when the prior is correctly specified, i.e., when the scene is truly Bernoulli-Gaussian with the assumed parameters $(\epsilon, \sigma_x^2)$. In practice, the prior is always wrong: the true scene may have a different sparsity, a different amplitude distribution, or structure that no parametric prior captures (extended targets, spatial correlations).
How robust is OAMP to such mismatch? Can we still use state evolution to predict performance under a mismatched denoiser? This section answers both questions.
Definition: Mismatched State Evolution
Let $p^\star$ be the true prior and $\hat{p}$ be the assumed prior used to design the denoiser $\eta_{\hat{p}}$. The mismatched state evolution for OAMP is:

$$v_{t+1} = \mathbb{E}\!\left[\big(\eta_{\hat{p}}(X + \tau_t Z) - X\big)^2\right], \qquad \tau_t^2 = \Phi_{\mathrm{LMMSE}}(v_t),$$

where the expectation is over $X \sim p^\star$ (the true prior) and $Z \sim \mathcal{N}(0, 1)$. The LMMSE step $\Phi_{\mathrm{LMMSE}}$ is unchanged (it does not depend on the prior).
The key point: state evolution is still valid under mismatch. It tracks the actual algorithm, not the intended algorithm. But the MSE is degraded because $\eta_{\hat{p}}$ is not optimal for $p^\star$.
Mismatched state evolution enables offline analysis of robustness: sweep over possible true priors while keeping the denoiser fixed, and plot the resulting MSE.
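To make the definition concrete, here is a minimal sketch of mismatched SE for the BG-MMSE denoiser, assuming Monte Carlo evaluation of the denoiser MSE. The function names (`bg_mmse`, `mismatched_mse`, `se_fixed_point`) and the scalar LMMSE map `phi_lmmse` are illustrative choices, not from the text, and the surrogate LMMSE map in the usage comment stands in for the actual Kronecker/partial-DFT step:

```python
# Minimal sketch of mismatched state evolution (not the text's reference code).
import numpy as np

def bg_mmse(y, tau2, eps, sig2):
    """BG-MMSE denoiser: posterior mean of X under the assumed prior
    X ~ (1-eps)*delta_0 + eps*N(0, sig2), observed as Y = X + N(0, tau2)."""
    v_on = sig2 + tau2
    # Log likelihood ratio of the "active" vs "zero" hypothesis per entry.
    log_lr = (np.log(eps / (1.0 - eps))
              + 0.5 * np.log(tau2 / v_on)
              + 0.5 * y**2 * (1.0 / tau2 - 1.0 / v_on))
    pi = 1.0 / (1.0 + np.exp(-log_lr))       # posterior P(X != 0 | y)
    return pi * (sig2 / v_on) * y            # Wiener gain on the active branch

def mismatched_mse(tau2, eps_true, sig2_true, eps_hat, sig2_hat,
                   n=200_000, seed=0):
    """Monte Carlo per-step MSE of the denoiser designed for (eps_hat, sig2_hat)
    when the truth is (eps_true, sig2_true)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(sig2_true), n) * (rng.random(n) < eps_true)
    y = x + rng.normal(0.0, np.sqrt(tau2), n)
    return np.mean((bg_mmse(y, tau2, eps_hat, sig2_hat) - x) ** 2)

def se_fixed_point(eps_true, sig2_true, eps_hat, sig2_hat,
                   phi_lmmse, v0=1.0, iters=50):
    """Iterate the mismatched SE map (LMMSE step, then denoiser MSE)."""
    v = v0
    for _ in range(iters):
        tau2 = phi_lmmse(v)                  # LMMSE step: prior-independent
        v = mismatched_mse(tau2, eps_true, sig2_true, eps_hat, sig2_hat)
    return v

# Usage with a crude i.i.d.-Gaussian surrogate for the LMMSE step
# (delta = sampling ratio, sigma_w = noise std; illustrative values only):
# phi = lambda v: sigma_w**2 + v / delta
# v_fix = se_fixed_point(0.10, 1.0, 0.05, 1.0, phi)
```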
Theorem: MSE Degradation Under Prior Mismatch
Let $v^\star_{\mathrm{matched}}$ be the fixed-point MSE of OAMP with the matched (Bayes-optimal) denoiser for the true prior $p^\star$, and let $v^\star_{\mathrm{mis}}$ be the fixed-point MSE with the mismatched denoiser $\eta_{\hat{p}}$ designed for $\hat{p}$. Then

$$v^\star_{\mathrm{mis}} \;\ge\; v^\star_{\mathrm{matched}},$$

with equality if and only if $\eta_{\hat{p}}$ equals the Bayes-optimal denoiser for $p^\star$ almost everywhere.

Moreover, the MSE degradation is bounded by

$$v^\star_{\mathrm{mis}} - v^\star_{\mathrm{matched}} \;\le\; \sup_{\tau} \Delta(\tau) \cdot \frac{1}{1 - \Psi'(v^\star)},$$

where the supremum is over noise levels $\tau$, $\Delta(\tau)$ is the per-step excess MSE defined below, and the second factor accounts for the amplification through the LMMSE step.
The mismatched denoiser is suboptimal at each iteration: it removes less noise than the Bayes-optimal denoiser would. This excess MSE feeds back into the LMMSE step, which sees a larger prior variance and produces a noisier output, creating a vicious cycle. The bound quantifies how much the per-step suboptimality is amplified by the iterative loop.
Per-step MSE comparison
At any noise level $\tau$, the MMSE denoiser minimizes the MSE by definition: $\mathrm{mmse}_{p^\star}(\tau) \le \mathrm{mse}_{\hat{p}}(\tau)$. The gap $\Delta(\tau) = \mathrm{mse}_{\hat{p}}(\tau) - \mathrm{mmse}_{p^\star}(\tau) \ge 0$ is the per-step excess MSE.
Fixed-point shift
The state evolution map with the mismatched denoiser is $\Psi_{\hat{p}}$. Since $\Psi_{\hat{p}}(v) \ge \Psi(v)$ (the matched map) for every $v$, the fixed point shifts up: $v^\star_{\mathrm{mis}} \ge v^\star_{\mathrm{matched}}$.
Bounding the shift via sensitivity analysis
By the implicit function theorem applied to the fixed-point equation $v = \Psi_{\hat{p}}(v)$, the shift is bounded by the maximum per-step gap $\sup_\tau \Delta(\tau)$ divided by the stability margin $1 - \Psi'(v^\star)$.
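Spelling out this last step (a sketch, assuming $\Psi$ is differentiable and $\Psi' < 1$ between the two fixed points): subtracting the two fixed-point equations and applying the mean value theorem gives

$$
v^\star_{\mathrm{mis}} - v^\star_{\mathrm{matched}}
= \underbrace{\Psi_{\hat{p}}\big(v^\star_{\mathrm{mis}}\big) - \Psi\big(v^\star_{\mathrm{mis}}\big)}_{\le\, \sup_\tau \Delta(\tau)}
+ \underbrace{\Psi\big(v^\star_{\mathrm{mis}}\big) - \Psi\big(v^\star_{\mathrm{matched}}\big)}_{=\, \Psi'(\xi)\,\left(v^\star_{\mathrm{mis}} - v^\star_{\mathrm{matched}}\right)},
$$

where $\xi$ lies between the two fixed points; rearranging yields the bound with stability margin $1 - \Psi'(\xi)$.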
Example: Sparsity Mismatch in BG-MMSE
The true scene is Bernoulli-Gaussian with sparsity $\epsilon^\star = 0.10$ and signal variance $\sigma_x^2$. The denoiser uses an assumed sparsity $\hat{\epsilon} \neq \epsilon^\star$. Compute the OAMP fixed-point NMSE via mismatched state evolution for $\hat{\epsilon} \in \{0.01, 0.05, 0.10, 0.15, 0.20, 0.30\}$.
Parameters: Kronecker sensing with partial DFT.
Mismatched SE computation
For each $\hat{\epsilon}$, we compute the BG-MMSE denoiser with assumed sparsity $\hat{\epsilon}$ and evaluate its MSE under the true prior with $\epsilon^\star = 0.10$:
| $\hat{\epsilon}$ | NMSE (dB) | Gap from optimal (dB) |
|---|---|---|
| 0.01 | | |
| 0.05 | | |
| 0.10 | (matched) | 0 |
| 0.15 | | |
| 0.20 | | |
| 0.30 | | |
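A sketch of the sweep behind this table, reusing `bg_mmse`, `mismatched_mse`, and `se_fixed_point` from the earlier sketch; the sensing parameters `delta` and `sigma_w` and the unit signal variance are placeholder assumptions, so the printed numbers will not reproduce the table exactly:

```python
# Sweep the assumed sparsity with the true prior fixed (reuses the functions
# and numpy import from the earlier sketch; parameter values are placeholders).
delta, sigma_w = 0.5, 0.1                    # assumed sampling ratio and noise std
phi = lambda v: sigma_w**2 + v / delta       # crude i.i.d.-Gaussian LMMSE surrogate

eps_true, sig2_true = 0.10, 1.0              # unit signal variance is an assumption
v_matched = se_fixed_point(eps_true, sig2_true, eps_true, sig2_true, phi)

for eps_hat in [0.01, 0.05, 0.10, 0.15, 0.20, 0.30]:
    v = se_fixed_point(eps_true, sig2_true, eps_hat, sig2_true, phi)
    nmse_db = 10 * np.log10(v / (eps_true * sig2_true))  # normalize by E[X^2]
    gap_db = 10 * np.log10(v / v_matched)
    print(f"eps_hat={eps_hat:.2f}  NMSE={nmse_db:+6.1f} dB  gap={gap_db:+5.1f} dB")
```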
Asymmetry of mismatch
Underestimating the sparsity ($\hat{\epsilon} < \epsilon^\star$) is much more damaging than overestimating it. At $\hat{\epsilon} = 0.01$, the denoiser aggressively kills components, missing many true scatterers. At $\hat{\epsilon} = 0.30$, it is too conservative (it preserves noise), but the penalty is milder.
Practical guideline
When the true sparsity is uncertain, it is safer to overestimate $\hat{\epsilon}$ by a factor of 1.5–2x. Alternatively, use EM-GAMP (Chapter 19) to learn $\epsilon$ and $\sigma_x^2$ from the data.
[Interactive demo: OAMP Performance Under Prior Mismatch. Explore how OAMP's NMSE degrades when the assumed sparsity or signal variance differs from the truth; the dashed line shows the Bayes-optimal NMSE.]
Minimax and Robust Denoisers
When no reliable prior information is available, one can use a minimax denoiser that minimizes the worst-case MSE over a class of signals (e.g., an $\ell_2$ ball of radius $R$). The minimax denoiser for AWGN is the James-Stein estimator:

$$\eta_{\mathrm{JS}}(\mathbf{y}) = \left(1 - \frac{(N-2)\,\tau^2}{\lVert \mathbf{y} \rVert_2^2}\right) \mathbf{y},$$

which shrinks $\mathbf{y}$ toward zero by a data-dependent factor.
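A minimal sketch of this estimator (the function name `james_stein` is mine; the positive-part clamp, which is known to dominate the plain version, is an added safeguard):

```python
import numpy as np

def james_stein(y, tau2):
    """Positive-part James-Stein shrinkage toward zero for y = x + N(0, tau2*I).
    Requires len(y) > 2; the clamp at zero avoids negative shrinkage factors."""
    n = len(y)
    shrink = max(0.0, 1.0 - (n - 2) * tau2 / float(np.sum(y**2)))
    return shrink * y
```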
Minimax denoisers are conservative: they sacrifice performance when the prior is favorable in exchange for robustness when the prior is unfavorable. In RF imaging, this tradeoff is usually not worthwhile: the scene statistics are often well-characterized from training data, and a learned denoiser outperforms minimax by a large margin.
Online Parameter Learning via EM
Rather than choosing prior parameters offline, they can be learned from the measurements using an EM (expectation-maximization) approach interleaved with the OAMP iterations:
- E-step: Run one OAMP iteration with the current parameters $(\hat{\epsilon}, \hat{\sigma}_x^2)$.
- M-step: Update $\hat{\epsilon}$ and $\hat{\sigma}_x^2$ using the posterior statistics from the denoiser output.
This is essentially EM-GAMP (Chapter 19) applied to the OAMP framework. It converges in 5–10 outer EM iterations for typical RF imaging problems, adding minimal overhead; a minimal sketch of the M-step appears after the caveats below.
- EM converges to a local maximum of the marginal likelihood; multiple initializations may be needed.
- For very low SNR or very high undersampling, the EM landscape may have spurious local optima.
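A sketch of one M-step for the BG parameters, assuming the standard EM-BG moment updates; the helper `em_update` and its interface are my assumptions, not the text's exact recipe, and `y` denotes the denoiser's pseudo-observation $x + \mathcal{N}(0, \tau^2)$ from the current iteration:

```python
import numpy as np

def em_update(y, tau2, eps_hat, sig2_hat):
    """One EM M-step for the Bernoulli-Gaussian prior parameters (a sketch)."""
    v_on = sig2_hat + tau2
    # E-step byproducts: posterior activity probabilities and moments per entry.
    log_lr = (np.log(eps_hat / (1.0 - eps_hat))
              + 0.5 * np.log(tau2 / v_on)
              + 0.5 * y**2 * (1.0 / tau2 - 1.0 / v_on))
    pi = 1.0 / (1.0 + np.exp(-log_lr))        # P(x_i != 0 | y_i)
    gain = sig2_hat / v_on
    m = gain * y                              # posterior mean, active branch
    s2 = gain * tau2                          # posterior variance, active branch
    # M-step: expected activity fraction and expected active-branch power.
    eps_new = np.mean(pi)
    sig2_new = np.sum(pi * (m**2 + s2)) / np.sum(pi)
    return eps_new, sig2_new
```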
Common Mistake: Signal Variance Mismatch Can Be Worse Than Sparsity Mismatch
Mistake:
Carefully tuning the sparsity parameter $\epsilon$ but using a default signal variance $\sigma_x^2$ without checking whether it matches the actual signal power.
Correction:
The BG-MMSE denoiser depends on both $\epsilon$ and $\sigma_x^2$. A 10x error in $\sigma_x^2$ can cause 5–8 dB of MSE degradation, comparable to a 5x error in $\epsilon$. Always estimate both parameters, either from training data or via EM within OAMP.
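One way to probe this per-step effect is with the Monte Carlo helper from the first sketch (all values below are illustrative choices, not the text's experiment):

```python
# Compare per-step excess MSE: 10x variance error vs. 5x sparsity error
# (reuses mismatched_mse and the numpy import from the first sketch).
tau2, eps, sig2 = 0.05, 0.10, 1.0
mse_matched = mismatched_mse(tau2, eps, sig2, eps, sig2)
mse_var10x  = mismatched_mse(tau2, eps, sig2, eps, 10 * sig2)
mse_eps5x   = mismatched_mse(tau2, eps, sig2, 5 * eps, sig2)
print("variance 10x:", 10 * np.log10(mse_var10x / mse_matched), "dB excess")
print("sparsity 5x :", 10 * np.log10(mse_eps5x / mse_matched), "dB excess")
```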
Quick Check
Mismatched state evolution for OAMP:
- Still accurately predicts the empirical MSE of the mismatched algorithm
- Overestimates the MSE because it assumes worst-case mismatch
- Is invalid because the orthogonality condition requires a matched prior
State evolution tracks the actual algorithm behavior, regardless of whether the denoiser is optimal. It predicts the MSE of whatever denoiser is used.
Key Takeaway
OAMP is moderately robust to prior mismatch: a 2x error in the sparsity parameter costs about 1–2 dB, while a 10x error can cost 5–8 dB. Underestimating sparsity is more harmful than overestimating it. Mismatched state evolution remains valid and enables offline robustness analysis. When prior parameters are uncertain, EM-based learning or learned denoisers provide the most robust reconstructions.