References & Further Reading
References
- S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, Plug-and-play priors for model based reconstruction, 2013
The original PnP paper. Introduces the idea of replacing the proximal operator with a denoiser in ADMM. Establishes the modularity principle that is the foundation of this chapter.
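For orientation, the substitution this paper introduced can be sketched in a few lines. This is a toy NumPy sketch of PnP-ADMM on a scalar denoising-type problem, with soft-thresholding standing in for a learned denoiser; the function names and the quadratic data term are illustrative, not taken from the paper.

```python
import numpy as np

def pnp_admm(y, denoise, rho=1.0, n_iter=50):
    """Toy PnP-ADMM for min_x 0.5*||x - y||^2 + R(x), where the
    proximal step for R is replaced by a plug-in denoiser."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        # x-update: closed-form prox of the quadratic data term
        x = (y + rho * (z - u)) / (1.0 + rho)
        # z-update: prox_R replaced by the denoiser (the PnP substitution)
        z = denoise(x + u)
        # dual (scaled multiplier) update
        u = u + x - z
    return x

# Soft-thresholding stands in for a learned denoiser.
soft = lambda v, t=0.1: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x_hat = pnp_admm(np.array([1.0, -0.5, 0.05]), soft)  # -> approx [0.9, -0.4, 0.0]
```

Swapping `soft` for any off-the-shelf denoiser leaves the iteration unchanged, which is the modularity principle the annotation refers to.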
- Y. Romano, M. Elad, and P. Milanfar, The little engine that could: Regularization by denoising (RED), 2017
Introduces RED, defining an explicit regulariser from the denoiser. Provides the gradient formula and convergence analysis for Section 21.4.
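As a quick reminder of the gradient formula the annotation refers to (standard RED notation with denoiser $D$; the symbols here are assumed, not taken from the chapter):

```latex
\rho_{\mathrm{RED}}(x) \;=\; \tfrac{1}{2}\, x^{\top}\bigl(x - D(x)\bigr),
\qquad
\nabla \rho_{\mathrm{RED}}(x) \;=\; x - D(x),
```

where the gradient identity requires $D$ to be locally homogeneous with a symmetric Jacobian, conditions examined critically by Reehorst and Schniter (2019).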
- E. K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin, Plug-and-play methods provably converge with properly trained denoisers, 2019
Establishes convergence conditions for PnP via Lipschitz-constrained denoisers. Provides the theoretical foundation for Section 21.3.
- K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, and R. Timofte, Plug-and-play image restoration with deep denoiser prior, 2021
Introduces DRUNet for PnP with noise-level input. Demonstrates state-of-the-art PnP results. The noise schedule and DRUNet design described in Section 21.2 follow this work.
- S. Hurault, A. Leclaire, and N. Papadakis, Gradient step denoiser for convergent Plug-and-Play, 2022
Proposes training the denoiser as the gradient of a scalar potential, ensuring that the PnP iterates converge to a stationary point of an explicit objective. Connects to the implicit regulariser theory of Section 21.2.
- S. Mukherjee, S. Dittmer, Z. Shumaylov, S. Lunz, O. Öktem, and C.-B. Schönlieb, Learned convex regularisers for inverse problems, 2021
Input-convex neural network (ICNN) regularisers for inverse problems with provable convergence. Provides the theoretical and practical basis for the ICNN treatment in Section 21.3.
- K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising, 2017
Introduces DnCNN with residual learning. The architecture and training strategy described in Section 21.2 follow this work.
- J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, SwinIR: Image restoration using Swin Transformer, 2021
Introduces SwinIR using Swin Transformer blocks for image restoration. State-of-the-art denoising quality; used in advanced PnP configurations.
- E. T. Reehorst and P. Schniter, Regularization by denoising: Clarifications and new interpretations, 2019
Clarifies the Jacobian symmetry assumption for RED and shows counterexamples when it fails. Essential reading for understanding RED's limitations discussed in Section 21.4.
- U. S. Kamilov, C. A. Bouman, G. T. Buzzard, and B. Wohlberg, Plug-and-play methods for integrating physical and learned models in computational imaging, 2023
Tutorial survey on PnP for computational imaging. Covers applications to MRI, CT, and microscopy; relevant to the RF imaging discussion of Section 21.5.
- P. L. Combettes and J.-C. Pesquet, Proximal splitting methods in signal processing, 2011
Foundational reference for proximal operators, averaged operators, and the Krasnoselskii-Mann convergence theorem used in Section 21.3.
- S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, 2011
The standard ADMM reference. Convergence theory in Section 21.3 builds on the ADMM framework established here.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, 2007
BM3D: the benchmark non-local denoiser used in early PnP work and cited in comparisons throughout this chapter.
- B. Efron, Tweedie's formula and selection bias, 2011
Modern treatment of Tweedie's formula connecting the MMSE denoiser to the score function. Used in the historical note and score-based interpretation of RED in Section 21.4.
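The formula in question, in its Gaussian form (a standard statement, with notation assumed): for $y = x + n$ with $n \sim \mathcal{N}(0, \sigma^{2} I)$,

```latex
\mathbb{E}[x \mid y] \;=\; y + \sigma^{2}\,\nabla_{y} \log p_{Y}(y),
```

so the MMSE denoising residual is a scaled score of the noisy marginal.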
- R. Zhao and G. Caire, Plug-and-play reconstruction for RF imaging with Kronecker-structured sensing, 2023
CommIT group paper applying PnP-ADMM to multistatic OFDM imaging with Kronecker sensing operators. Demonstrates $O(Q\log Q)$ per-iteration complexity and convergence of non-expansive DRUNet variants.
Further Reading
For readers who want to go deeper into specific topics from this chapter.
Convergence theory for PnP and RED
E. T. Reehorst and P. Schniter, 'Regularization by denoising: clarifications and new interpretations,' IEEE Trans. Computational Imaging, vol. 5, no. 1, pp. 52–67, 2019
Clarifies the assumptions needed for RED convergence and provides counterexamples when Jacobian symmetry fails. Essential companion to Section 21.4.
Gradient-step denoisers
S. Hurault, A. Leclaire, and N. Papadakis, 'Proximal denoiser for convergent Plug-and-Play optimization with nonconvex regularization,' Proc. ICML, 2022
Extends gradient-step denoisers to non-convex regularisers and proves convergence to critical points. Bridges the theory gap between unconstrained PnP and ICNN-based methods.
PnP for computational imaging survey
U. S. Kamilov et al., 'Plug-and-play methods for integrating physical and learned models in computational imaging,' IEEE Signal Processing Magazine, vol. 40, no. 1, pp. 85–97, 2023
Tutorial-style survey covering PnP across imaging modalities (MRI, CT, microscopy). Provides context for the RF imaging application of Section 21.5.
Input-convex neural networks
B. Amos, L. Xu, and J. Z. Kolter, 'Input convex neural networks,' Proc. ICML, 2017
Foundational ICNN paper. Provides the theoretical background for the convex denoiser approach in Section 21.3.
Score matching and denoising connections
P. Vincent, 'A connection between score matching and denoising autoencoders,' Neural Computation, vol. 23, no. 7, 2011
Shows that denoising score matching trains the denoiser to estimate the score function. Bridges Section 21.4's score interpretation and Chapter 22's diffusion models.
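Vincent's equivalence can be stated compactly (a standard formulation, with notation assumed): for Gaussian corruption $q_\sigma(y \mid x) = \mathcal{N}(y;\, x,\, \sigma^{2} I)$, the denoising score matching objective

```latex
J(\theta) \;=\; \mathbb{E}_{x \sim p,\; y \sim q_\sigma(\cdot \mid x)}
\Bigl[\tfrac{1}{2}\,\bigl\| s_\theta(y) - \tfrac{x - y}{\sigma^{2}} \bigr\|^{2}\Bigr]
```

has the same minimiser as explicit score matching against the smoothed density $q_\sigma(y) = \int q_\sigma(y \mid x)\, p(x)\, dx$; the optimal $s_\theta$ therefore estimates $\nabla_y \log q_\sigma(y)$.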