References & Further Reading
References
- K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, Deep convolutional neural network for inverse problems in imaging, 2017
Seminal paper establishing the MF-to-U-Net (FBPConvNet) framework for inverse problems in imaging. Demonstrates that a U-Net applied to the filtered back-projection image achieves state-of-the-art CT reconstruction. The key insight – apply the physics once to get a crude image, then denoise with a CNN – is the foundation of Section 20.1.
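The two-stage recipe can be sketched in a few lines. Everything here is an illustrative stand-in: a random sensing matrix plays the role of the physics operator, and a soft-thresholding step plays the role of the trained U-Net denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model y = A x + noise; A stands in for the imaging physics.
n_meas, n_pix = 64, 128
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
x_true = np.zeros(n_pix)
x_true[rng.choice(n_pix, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Stage 1: apply the physics once (matched filter / back-projection).
x_crude = A.T @ y

# Stage 2: denoise the crude image. A trained U-Net would go here;
# soft-thresholding is a placeholder with a qualitatively similar role.
def denoise(x, tau=0.3):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x_hat = denoise(x_crude)
```

Even this crude stand-in removes most of the sidelobe clutter in `x_crude`; the learned denoiser simply does the same job far better.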
- H. K. Aggarwal, M. P. Mani, and M. Jacob, MoDL: Model-based deep learning architecture for inverse problems, 2019
Introduces MoDL: a network that interleaves a CNN denoiser with conjugate-gradient data-consistency steps. Weight sharing across iterations dramatically reduces parameter count. The CommIT group uses MoDL as a reference baseline for RF imaging reconstruction (Section 20.2).
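A minimal numerical sketch of the MoDL alternation, with a simple shrinkage standing in for the shared-weight CNN denoiser and a direct solve in place of the paper's conjugate-gradient inner loop (sizes and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_pix = 48, 64
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
y = A @ rng.standard_normal(n_pix)
lam = 1.0

def denoiser(x):
    # Stand-in for the CNN D_w whose weights are shared across iterations.
    return 0.9 * x

# MoDL alternation: z_k = D(x_k), then the data-consistency subproblem
#   x_{k+1} = argmin_x ||A x - y||^2 + lam ||x - z_k||^2
#           = (A^T A + lam I)^{-1} (A^T y + lam z_k)
# (MoDL solves this inner system with conjugate gradients; a direct
# solve is used here for brevity.)
H = A.T @ A + lam * np.eye(n_pix)
x = np.zeros(n_pix)
for _ in range(10):
    z = denoiser(x)
    x = np.linalg.solve(H, A.T @ y + lam * z)
```

The weight sharing is visible in the structure: the same `denoiser` is called at every iteration, so unrolling more iterations adds no parameters.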
- J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert, A deep cascade of convolutional neural networks for dynamic MR image reconstruction, 2018
Cascade network for accelerated MRI that interleaves CNN blocks with hard data-consistency layers. Introduces the DC layer (keep acquired k-space, let network fill the rest) that is analysed theoretically in Section 20.2.
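In the noiseless limit, the hard DC layer reduces to replacing the network's k-space values with the measured ones wherever data were acquired. A 1-D numpy sketch (the mask, sizes, and "network output" are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
x_true = rng.standard_normal(n)
k_full = np.fft.fft(x_true)

# Undersampling mask: True where k-space was actually acquired.
mask = rng.random(n) < 0.4
k_acq = np.where(mask, k_full, 0.0)

def data_consistency(x_net, k_acq, mask):
    """Hard DC layer: keep acquired k-space samples, let the network
    output fill the unacquired locations (noiseless case)."""
    k_net = np.fft.fft(x_net)
    k_dc = np.where(mask, k_acq, k_net)
    return np.fft.ifft(k_dc)

# Any network output passes through DC unchanged at acquired frequencies.
x_net = rng.standard_normal(n)        # stand-in for a CNN block's output
x_dc = data_consistency(x_net, k_acq, mask)
```

Whatever the CNN blocks produce, the output after DC agrees exactly with the measurements at acquired frequencies, which is the property analysed in Section 20.2.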
- O. Ronneberger, P. Fischer, and T. Brox, U-Net: Convolutional networks for biomedical image segmentation, 2015
The original U-Net paper. The encoder-decoder architecture with skip connections has become the standard backbone for image-to-image translation in reconstruction networks. Section 20.1 uses the U-Net as the post-processor.
- J. Johnson, A. Alahi, and L. Fei-Fei, Perceptual losses for real-time style transfer and super-resolution, 2016
Introduces perceptual loss (VGG feature-space distance) for image reconstruction. Shows that feature-space losses produce sharper results than MSE for super-resolution tasks. Section 20.4 uses this as one component of the combined loss for RF imaging.
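In spirit, the perceptual loss is just an MSE computed in the feature space of a frozen network φ. The sketch below uses a single fixed random ReLU layer as a stand-in for the truncated, pretrained VGG features (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "feature extractor": one fixed random ReLU layer standing in
# for the truncated VGG network used in the paper.
W = rng.standard_normal((32, 64))

def phi(x):
    return np.maximum(W @ x, 0.0)

def perceptual_loss(x_hat, x_ref):
    # MSE in feature space rather than pixel space.
    return float(np.mean((phi(x_hat) - phi(x_ref)) ** 2))

x_ref = rng.standard_normal(64)
x_hat = x_ref + 0.1 * rng.standard_normal(64)
loss = perceptual_loss(x_hat, x_ref)
```

Because φ is frozen, gradients flow through it into the reconstruction network but never update the feature extractor itself.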
- A. Rezaei, K. Zhi, T. Yang, and G. Caire, MF-to-U-Net Pipeline Analysis for Structured RF Imaging Operators, 2026
CommIT group finding that structured sensing matrices cause sidelobe corruption in the MF-to-U-Net pipeline: sidelobe artefacts and back-projected noise share the same spatial covariance (the Gram matrix), making them statistically indistinguishable from real scene features. This motivates data-consistency layers and physics-informed architectures.
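The covariance claim is easy to verify numerically: back-projected white noise A^T n has covariance σ² A^T A, i.e. exactly the Gram matrix that also shapes the sidelobe pattern. A small Monte-Carlo check (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_pix = 100, 16
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)

# Back-project many unit-variance white-noise measurement vectors and
# estimate the spatial covariance of the resulting images.
N = 50_000
noise = rng.standard_normal((N, n_meas))
back_projected = noise @ A                      # rows are samples of A^T n
cov_emp = back_projected.T @ back_projected / N

gram = A.T @ A                                  # Gram matrix of the operator
```

The empirical covariance of the back-projected noise matches the Gram matrix, which is why a post-processor that sees only the back-projected image cannot separate noise artefacts from sidelobes on statistical grounds.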
Further Reading
For readers who want to go deeper into specific topics from this chapter.
Deep learning for inverse problems – comprehensive survey
S. Arridge, P. Maass, O. Öktem, and C.-B. Schönlieb, 'Solving inverse problems using data-driven models,' Acta Numerica, vol. 28, pp. 1–174, 2019.
Comprehensive survey covering post-processing, model-based, and unrolled approaches for inverse problems. Provides the mathematical framework connecting Sections 20.1–20.3 to the broader inverse problems literature.
Data consistency in variational networks for MRI
K. Hammernik et al., 'Learning a variational network for reconstruction of accelerated MRI data,' Magnetic Resonance in Medicine, vol. 79, pp. 3055–3071, 2018.
Variational network that embeds data-consistency at every layer. Provides the MRI perspective on the physics-informed approach of Section 20.3 and the combined loss of Section 20.4.
Plug-and-play priors as a bridge between post-processing and model-based methods
S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, 'Plug-and-play priors for model based reconstruction,' Proc. IEEE GlobalSIP, 2013.
Introduces the plug-and-play framework that Chapter 21 analyses in depth. Reading this paper alongside MoDL (Aggarwal 2019) clarifies the relationship between data-consistency layers and proximal algorithms.
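A compact PnP-ADMM sketch makes the connection concrete: the proximal operator of the prior is simply replaced by an off-the-shelf denoiser. Here soft-thresholding keeps the sketch self-contained; any denoiser, including a CNN, can be plugged in. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_pix = 40, 64
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
x_true = np.zeros(n_pix)
x_true[::8] = 1.0
y = A @ x_true

rho = 1.0
H = A.T @ A + rho * np.eye(n_pix)

def denoiser(v, tau=0.1):
    # The "plugged-in" prior: any denoiser (BM3D, a CNN, ...) can go here.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# PnP-ADMM: standard ADMM with the prior's proximal step swapped
# for the denoiser.
x = np.zeros(n_pix)
v = np.zeros(n_pix)
u = np.zeros(n_pix)
for _ in range(30):
    x = np.linalg.solve(H, A.T @ y + rho * (v - u))   # data-fidelity prox
    v = denoiser(x + u)                               # denoiser as prior prox
    u = u + x - v
```

Read alongside the MoDL sketch, the parallel is clear: MoDL's data-consistency solve is the same x-update, and its CNN plays the role of the plugged-in denoiser.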
Deep learning for radar imaging – application survey
M. Gao, S. H. Lim, and C. Studer, 'Learning-based radar imaging,' IEEE Signal Processing Magazine, vol. 40, no. 4, 2023.
Survey on applying neural networks to radar and SAR imaging, covering the physical operator challenges discussed in Sections 20.1 and 20.3. Includes practical results for the sidelobe suppression problem.
Perceptual and adversarial losses for image reconstruction
C. Ledig et al., 'Photo-realistic single image super-resolution using a generative adversarial network,' Proc. CVPR, 2017.
SRGAN demonstrates the power and pitfalls of adversarial training for image reconstruction. The hallucination artefacts discussed in Section 20.4 are clearly visible when comparing SRGAN to bicubic baseline on texture-rich regions.