Denoiser Design for Imaging
The Denoiser Is the Knob You Turn
OAMP separates the reconstruction problem into two independent modules: the LMMSE step (which depends only on the sensing matrix $A$ and the noise variance $\sigma^2$) and the denoiser step (which depends only on the signal prior). The LMMSE step is fixed once the measurement system is designed. All the modeling flexibility lives in the denoiser.
This modularity is OAMP's greatest strength: we can plug in any denoiser, from classical soft thresholding to a pre-trained neural network, without modifying the rest of the algorithm. State evolution still tracks the performance, provided the denoiser satisfies mild regularity conditions (Lipschitz continuity, convergent divergence).
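To make the plug-in structure concrete, here is a minimal sketch (in NumPy) of a denoiser as a callable returning its estimate and divergence, wrapped in OAMP's divergence-free correction. The interface, the `alpha = 1.4` threshold constant, and the $1/(1-\mathrm{div})$ normalization are illustrative choices, not a fixed OAMP specification:

```python
import numpy as np

# A denoiser is any callable (r, tau) -> (xhat, div): the estimate plus the
# average divergence (mean of d xhat_i / d r_i). Names and the alpha constant
# are illustrative, not part of a fixed OAMP API.

def soft_threshold(r, tau, alpha=1.4):
    thr = alpha * tau  # threshold scales with the effective noise level
    xhat = np.sign(r) * np.maximum(np.abs(r) - thr, 0.0)
    return xhat, np.mean(np.abs(r) > thr)

def divergence_free(denoiser, r, tau):
    """OAMP-style divergence-free wrapper: subtracting div * r makes the
    output error (asymptotically) uncorrelated with the input error; the
    1/(1 - div) factor is one common normalization."""
    xhat, div = denoiser(r, tau)
    return (xhat - div * r) / (1.0 - div)

r = np.array([0.1, -2.0, 3.0, 0.05])
s = divergence_free(soft_threshold, r, tau=0.5)  # -> [-0.1, -0.6, 1.6, -0.05]
```

Any denoiser exposing the same `(r, tau) -> (xhat, div)` signature, including a neural network with an estimated divergence, can be swapped in without touching the wrapper.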
Definition: Soft-Thresholding Denoiser
The soft-thresholding denoiser with threshold $\theta$ is
$$\eta_\theta(r) = \operatorname{sign}(r)\,\max(|r| - \theta,\ 0),$$
applied component-wise. For complex-valued $r$, this shrinks the magnitude toward zero while preserving the phase.
Divergence: $\eta_\theta'(r) = \mathbb{1}\{|r| > \theta\}$, so the average divergence is the fraction of active (non-thresholded) components.
Soft thresholding is the MAP denoiser for a Laplace prior ($p(x) \propto e^{-\lambda |x|}$). The threshold should be set as a function of the effective noise variance $\tau_t^2$; the SE-optimal choice scales as $\theta_t = \alpha\,\tau_t$, with the constant $\alpha$ tuned to the sparsity level.
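The component-wise rule, including the complex-valued case mentioned above, can be sketched as a minimal NumPy implementation (the tiny `1e-30` guard is only there to avoid division by zero at $r = 0$):

```python
import numpy as np

def soft_threshold(r, thr):
    """Complex-aware soft thresholding: shrink |r| toward zero by thr while
    preserving the phase; reduces to sign(r) * max(|r| - thr, 0) for real r."""
    mag = np.abs(r)
    scale = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-30)  # guard r = 0
    return scale * r

r = np.array([0.3 + 0.4j, 3.0 - 4.0j, 0.1 + 0.0j])
out = soft_threshold(r, 1.0)  # -> [0, 2.4-3.2j, 0]: magnitudes 0.5 and 0.1 are zeroed
```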
Definition: Bernoulli-Gaussian MMSE Denoiser
For the Bernoulli-Gaussian prior $p(x) = (1-\rho)\,\delta(x) + \rho\,\mathcal{N}(x; 0, \sigma_x^2)$, the Bayes-optimal (posterior mean) denoiser for the observation $r = x + \mathcal{N}(0, \tau^2)$ is
$$\eta(r) = \mathbb{E}[x \mid r] = \pi(r)\,\frac{\sigma_x^2}{\sigma_x^2 + \tau^2}\,r.$$
Divergence: $\eta'(r)$ is obtained by differentiating this expression and has a closed-form expression involving the posterior probability of activity
$$\pi(r) = \Pr\{x \neq 0 \mid r\} = \frac{\rho\,\mathcal{N}(r; 0, \sigma_x^2 + \tau^2)}{\rho\,\mathcal{N}(r; 0, \sigma_x^2 + \tau^2) + (1-\rho)\,\mathcal{N}(r; 0, \tau^2)}.$$
The BG-MMSE denoiser achieves the Bayes-optimal MSE when the prior is correctly specified. It is the default choice for OAMP in RF imaging when the scene is known to be sparse.
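A sketch of the BG-MMSE denoiser and its closed-form divergence, using the prior parameters $\rho$ and $\sigma_x^2$ from the definition (the default values below are illustrative); the posterior log-odds are computed in the log domain for numerical stability:

```python
import numpy as np

def bg_mmse(r, tau2, rho=0.1, sx2=1.0):
    """Posterior-mean (MMSE) denoiser for the Bernoulli-Gaussian prior
    (1 - rho) * delta_0 + rho * N(0, sx2), observed as r = x + N(0, tau2).
    Returns the estimate and its closed-form average divergence."""
    s1 = sx2 + tau2
    # posterior log-odds of 'active' vs 'inactive' (log domain for stability)
    logit = (np.log(rho / (1 - rho)) + 0.5 * np.log(tau2 / s1)
             + 0.5 * r**2 * (1.0 / tau2 - 1.0 / s1))
    pi = 1.0 / (1.0 + np.exp(-logit))   # posterior activity probability
    gain = sx2 / s1                     # Wiener gain for an active component
    xhat = pi * gain * r
    # d(xhat)/dr = gain * pi * (1 + (1 - pi) * r^2 * sx2 / (tau2 * s1))
    deriv = gain * pi * (1.0 + (1.0 - pi) * r**2 * sx2 / (tau2 * s1))
    return xhat, np.mean(deriv)
```

The returned average divergence is exactly the quantity OAMP's divergence-free step consumes, so no Monte Carlo estimate is needed for this denoiser.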
Theorem: Bayes-Optimal OAMP
When the denoiser is the posterior mean (MMSE denoiser) under the true prior $p(x)$, OAMP achieves the Bayes-optimal MSE, the minimum MSE achievable by any estimator, among all algorithms that use only the singular values of $A$ (not the specific structure of its singular vectors) and operate iteratively.
Formally, the OAMP state evolution fixed point $(\tau_\star^2, v_\star)$ satisfies
$$\tau_\star^2 = \phi(v_\star), \qquad v_\star = \psi(\tau_\star^2),$$
where $\phi(v_\star)$ is the LMMSE MSE at the fixed point and $\psi$ is the MSE of the MMSE denoiser, and this matches the replica prediction from statistical physics.
OAMP with the optimal denoiser extracts all the information that any iterative algorithm can extract from the singular value distribution of $A$ and the prior $p(x)$. The only way to do better is to exploit specific structure in the right singular vectors (which OAMP treats as Haar-random by assumption).
State evolution fixed-point equations
At the fixed point, $\tau^2 = \phi(v)$ (the LMMSE MSE as a function of the prior variance $v$) and $v = \psi(\tau^2)$ (the denoiser MSE). With the MMSE denoiser, $\psi(\tau^2) = \operatorname{mmse}(\tau^2)$.
Connection to replica prediction
The replica method from statistical physics predicts the same fixed-point equations for the Bayes-optimal MSE of any right-rotationally invariant sensing system. OAMP's state evolution matches this prediction, confirming its optimality.
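The fixed point can be found by simply alternating the two scalar maps. The sketch below uses an illustrative stand-in for the LMMSE curve, the i.i.d.-Gaussian-style line $\tau^2 = \sigma^2 + v/\delta$, and estimates the denoiser MSE $\psi(\tau^2)$ by Monte Carlo for a soft-thresholding denoiser on a Bernoulli-Gaussian signal (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, delta, sigma2 = 0.1, 0.5, 1e-3   # sparsity, sampling ratio m/n, noise var
x = (rng.random(200_000) < rho) * rng.standard_normal(200_000)  # BG signal

def psi(tau2):
    """Monte-Carlo denoiser MSE: soft threshold at 1.4*tau on r = x + tau*z."""
    tau = np.sqrt(tau2)
    r = x + tau * rng.standard_normal(x.size)
    xhat = np.sign(r) * np.maximum(np.abs(r) - 1.4 * tau, 0.0)
    return np.mean((xhat - x) ** 2)

v = np.mean(x ** 2)            # initial prior variance (~rho here)
for _ in range(30):
    tau2 = sigma2 + v / delta  # stand-in phi: effective noise after linear step
    v = psi(tau2)              # denoiser MSE closes the loop
# at the fixed point, v and tau2 stop changing (up to Monte-Carlo noise)
```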
From Hand-Crafted to Learned Denoisers
The modularity of OAMP invites a powerful idea: replace the hand-crafted denoiser with a neural network trained for image denoising. The LMMSE step ensures that the denoiser input is approximately $x + \tau z$ with $z \sim \mathcal{N}(0, I)$, a standard Gaussian denoising problem. Any denoiser trained to remove Gaussian noise at level $\tau$ can be plugged in.
Popular choices:
- DnCNN (Zhang et al., 2017): A residual CNN trained to predict the noise. Fast, effective, and well-suited to natural images.
- U-Net: An encoder-decoder architecture with skip connections, effective for images with multi-scale structure.
- DRUNet (Zhang et al., 2021): A noise-level-aware U-Net that accepts the noise level $\tau$ as an input channel, enabling a single network to denoise at all noise levels encountered during OAMP iterations.
The Hutchinson estimator (see Hutchinson Trace Estimator) provides the divergence needed for the MSE update, since the network Jacobian is not available in closed form.
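A one-probe Hutchinson divergence estimator can be sketched in a few lines; with a Rademacher probe it is exact for linear maps and, for piecewise-linear denoisers such as soft thresholding, it recovers the fraction of active components (the step size `eps` is an illustrative choice):

```python
import numpy as np

def hutchinson_divergence(f, r, eps=1e-3, rng=None):
    """One-probe Hutchinson estimate of the divergence (1/n) * tr(J_f(r))
    for a black-box denoiser f, via a Rademacher probe and a forward
    finite difference."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.choice([-1.0, 1.0], size=r.shape)  # Rademacher probe, z_i^2 = 1
    return z @ (f(r + eps * z) - f(r)) / (eps * r.size)

# Linear map with known divergence 0.7: the estimate is exact here.
r = np.linspace(-2.0, 2.0, 64)
div = hutchinson_divergence(lambda v: 0.7 * v, r)  # -> 0.7
```

In practice, averaging over a few probes reduces the estimator's variance for genuinely nonlinear networks.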
OAMP with Learned Denoisers for RF Imaging
The CommIT group demonstrated that replacing the BG-MMSE denoiser in OAMP with a trained DnCNN improves RF image reconstruction by 3–5 dB NMSE for scenes with non-sparse structure (e.g., extended targets, smooth surfaces).
Key findings:
- A noise-level-conditional DnCNN, trained on a dataset of RF reflectivity maps, consistently outperforms BG-MMSE for non-sparse scenes.
- For truly sparse scenes (few point scatterers), BG-MMSE matches or slightly outperforms the learned denoiser: the prior is correctly specified and hard to beat.
- The Hutchinson divergence estimator introduces negligible overhead compared to the denoiser evaluation itself.
- State evolution with the empirical MSE of the learned denoiser accurately predicts the algorithm's convergence trajectory.
This work bridges the classical message-passing framework of this chapter with the deep unfolding approach of Chapter 27.
Denoiser Comparison in OAMP
Compare OAMP convergence with different denoisers: soft thresholding, BG-MMSE, and a simulated learned denoiser. The learned denoiser excels for extended (non-sparse) targets.
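As a small, self-contained stand-in for such a comparison, the following sketch measures the one-step denoising MSE of soft thresholding versus the matched Bernoulli-Gaussian posterior mean at a fixed effective noise level (all parameter values are illustrative; on a matched BG signal the posterior mean should come out ahead):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, sx2, tau2 = 100_000, 0.1, 1.0, 0.05   # illustrative parameters
x = (rng.random(n) < rho) * rng.standard_normal(n) * np.sqrt(sx2)
r = x + np.sqrt(tau2) * rng.standard_normal(n)  # denoiser input at level tau

# soft thresholding, threshold proportional to the noise level
st = np.sign(r) * np.maximum(np.abs(r) - 1.4 * np.sqrt(tau2), 0.0)

# BG posterior mean with the matched prior
s1 = sx2 + tau2
logit = (np.log(rho / (1 - rho)) + 0.5 * np.log(tau2 / s1)
         + 0.5 * r ** 2 * (1.0 / tau2 - 1.0 / s1))
bg = (1.0 / (1.0 + np.exp(-logit))) * (sx2 / s1) * r

mse_st = np.mean((st - x) ** 2)
mse_bg = np.mean((bg - x) ** 2)  # Bayes-optimal here, so mse_bg < mse_st
```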
Denoiser Properties for OAMP
| Denoiser | Prior assumption | Divergence | Best for | Limitation |
|---|---|---|---|---|
| Soft threshold | Laplace (sparse) | Closed form (fraction active) | Very sparse signals | Bias on large coefficients; suboptimal threshold |
| BG-MMSE | Bernoulli-Gaussian | Closed form | Sparse scenes with known statistics | Assumes specific parametric prior |
| Minimax | Worst-case over ball | Analytical | Unknown prior, robust recovery | Conservative; ignores prior structure |
| DnCNN | Learned from data | Hutchinson estimate | Natural images, extended targets | Requires training data; may not generalize to unseen scenes |
| U-Net / DRUNet | Learned from data (noise-aware) | Hutchinson estimate | Multi-scale structure, varying noise | Larger model; slower per iteration |
Example: Choosing a Denoiser for an RF Imaging Scenario
An ISAC base station images an urban scene containing:
- 3 strong point scatterers (vehicles) at unknown positions,
- Extended building facades producing distributed reflections,
- Background clutter.
The scene has $N$ voxels and $M \ll N$ measurements.
Which denoiser should be used in OAMP?
Scene characteristics
The scene is not purely sparse: it contains extended targets alongside point scatterers. A Bernoulli-Gaussian prior is a poor model for the building facades.
Denoiser choice
A noise-level-conditional U-Net (DRUNet) trained on a dataset of urban RF reflectivity maps is the best choice:
- It handles both point scatterers (which it learns to preserve) and extended structures (which it smooths appropriately).
- The noise-level input adapts the denoiser behavior across OAMP iterations.
Fallback
If no training data is available, BG-MMSE with a conservatively chosen sparsity parameter (set higher than the true point-scatterer fraction) provides a reasonable baseline. It will over-smooth the building facades but correctly recover the point scatterers.
OAMP Iteration: Alternating LMMSE and Denoiser Steps
Historical Note: D-AMP, the Bridge to Learned Denoisers
2016–2017. The idea of plugging a generic denoiser into AMP was pioneered by Metzler, Maleki, and Baraniuk (2016) as D-AMP (Denoising-based AMP). They showed that BM3D, a non-local means denoiser originally designed for natural image denoising, could be used as the AMP denoiser, dramatically improving reconstruction quality for natural images.
D-AMP used the Stein divergence estimator to compute the Onsager correction for the black-box denoiser. However, D-AMP inherited AMP's limitation to i.i.d. Gaussian matrices. The combination of D-AMP's denoiser flexibility with OAMP's matrix generality yields the learned OAMP framework described in this section.
Common Mistake: The Denoiser Must Be Matched to the Effective Noise Level
Mistake:
Using a denoiser trained at a fixed noise level for all OAMP iterations, even though the effective noise changes at each iteration (typically decreasing from a large initial value to a small final value).
Correction:
The denoiser must be noise-level-aware. Options:
- Train a separate denoiser for each noise level (wasteful).
- Use a noise-level-conditional network (DRUNet): feed the current noise level $\tau_t$ as an extra input channel.
- Use a denoiser with an explicit noise parameter (BG-MMSE: the noise-variance parameter is set to the current $\tau_t^2$ at each iteration).
A mismatched noise level causes the denoiser to either over-smooth (if it thinks the noise is too high) or leave residual noise (if it thinks the noise is too low), degrading the OAMP fixed point.
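The mismatch effect is easy to reproduce: the sketch below denoises the same observation once with a threshold matched to the true noise level and once with a stale threshold left at an early iteration's much larger level (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = (rng.random(100_000) < 0.1) * rng.standard_normal(100_000)  # sparse signal

def soft(r, thr):
    return np.sign(r) * np.maximum(np.abs(r) - thr, 0.0)

tau_now, tau_stale = 0.05, 0.5   # effective noise shrinks as OAMP iterates
r = x + tau_now * rng.standard_normal(x.size)
mse_matched = np.mean((soft(r, 1.4 * tau_now) - x) ** 2)
mse_stale = np.mean((soft(r, 1.4 * tau_stale) - x) ** 2)  # over-smooths badly
```

The stale threshold wipes out small active coefficients, which is exactly the over-smoothing failure mode described above.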
Quick Check
OAMP with the BG-MMSE denoiser achieves low NMSE on a sparse Bernoulli-Gaussian scene. Replacing BG-MMSE with a learned DnCNN denoiser is most likely to improve performance when:
- The scene is truly Bernoulli-Gaussian and the prior parameters are known
- The scene contains extended targets (building facades, ground surface) not well modeled by a sparse prior
- The SNR is very high (>40 dB)
Learned denoisers excel when the true scene statistics deviate from the parametric prior assumed by analytical denoisers. Extended targets violate the sparsity assumption and are better handled by a network trained on realistic scenes.
Key Takeaway
OAMP's modularity allows any denoiser to be plugged in: from classical soft thresholding (which gives LASSO) to Bayes-optimal BG-MMSE (which achieves the information-theoretic limit for sparse signals) to learned neural denoisers (which handle realistic, non-sparse RF scenes). The choice of denoiser is the primary modeling decision in OAMP-based RF imaging.