Chapter Summary
Key Points
1. PnP replaces the proximal step in ADMM or PGD with any off-the-shelf denoiser, exploiting the equivalence between proximal operators and MAP Gaussian denoising. This decouples the algorithm from the explicit prior and allows state-of-the-art denoisers to be plugged in without modification (see the PnP-PGD sketch after this list).
2. PnP-ADMM and PnP-PGD are the two main PnP variants. PnP-ADMM is more efficient when the data-fidelity subproblem has a closed-form or FFT-based solver; PnP-PGD is simpler to implement (an FFT-based ADMM sketch follows the list).
3. Deep denoisers (DnCNN, DRUNet, SwinIR) serve as implicit image priors that capture textures and edges beyond the reach of handcrafted priors. DRUNet with noise-level conditioning is the recommended default for PnP, enabling adaptive denoising via a decreasing noise schedule (the schedule appears in the PnP-PGD sketch below).
4. Convergence requires non-expansiveness of the denoiser $D_\sigma$ ($\|D_\sigma(x) - D_\sigma(y)\| \le \|x - y\|$) for PnP-PGD, or strong monotonicity of the data-fidelity gradient $\nabla f$ for PnP-ADMM. Gradient-step denoisers and ICNN denoisers provide stronger guarantees at the cost of reduced expressivity (a numerical non-expansiveness probe follows the list).
5. RED defines the explicit regulariser $\rho(x) = \tfrac{1}{2}\, x^\top \big(x - D(x)\big)$ with gradient $\nabla \rho(x) = x - D(x)$ under Jacobian symmetry. RED provides an explicit objective, but the Jacobian-symmetry assumption is rarely exact for deep denoisers (see the RED gradient-descent sketch below).
6. For RF imaging, PnP and RED offer modular reconstruction that reuses the same denoiser across sensing geometries. Zero-shot DRUNet improves over LASSO for structured scenes; fine-tuning on simulated RF data closes an additional 2–3 dB gap over OAMP.
Looking Ahead
Chapter 22 extends learned priors to score-based diffusion models, where the MMSE denoiser from this chapter becomes the engine of a reverse diffusion process. Diffusion posterior sampling replaces the fixed denoiser in PnP with a sequence of score function evaluations, enabling posterior sampling rather than just point estimation.