References & Further Reading

References

  1. D. Ulyanov, A. Vedaldi, and V. Lempitsky, Deep image prior, 2018

    The seminal DIP paper, demonstrating that an untrained CNN architecture alone encodes a strong image prior. Establishes the early-stopping phenomenon discussed in Section 23.1 (the spectral-bias explanation is developed in reference 12); a minimal training-loop sketch follows this list.

  2. R. Heckel and P. Hand, Deep decoder: Concise image representations from untrained non-convolutional networks, 2019

    Introduces under-parameterised generator networks that avoid DIP's overfitting without early stopping: the architectural prior replaces procedural regularisation.

  3. J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, Noise2Noise: Learning image restoration without clean data, 2018

    Proves that training with noisy targets yields the same minimiser as training with clean targets, provided the target noise is zero-mean and independent of the input; the underlying identity is sketched after this list.

  4. A. Krull, T.-O. Buchholz, and F. Jug, Noise2Void — Learning denoising from single noisy images, 2019

    Extends self-supervised denoising to single-image training, using blind-spot networks under a pixel-wise independent noise assumption.

  5. J. Batson and L. Royer, Noise2Self: Blind denoising by self-supervision, 2019

    Formalises self-supervised denoising via the J-invariance framework, proving that for J-invariant denoisers the masked self-supervised loss equals the supervised loss up to a constant (see the identities after this list).

  6. C. M. Stein, Estimation of the mean of a multivariate normal distribution, 1981

    The foundational result on unbiased risk estimation. Stein's lemma is the key identity behind SURE.

  7. S. Soltanayev and S. Y. Chun, Training deep learning based denoisers without ground truth data, 2018

    First application of SURE for end-to-end training of deep denoisers, providing the unsupervised framework discussed in Section 23.3.

  8. S. Ramani, T. Blu, and M. Unser, Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms, 2008

    Monte Carlo divergence estimation for SURE, enabling its use with any differentiable denoiser without a closed-form Jacobian; see the sketch after this list.

  9. D. Chen, J. Tachella, and M. E. Davies, Equivariant imaging: Learning beyond the range space, 2021

    Introduces equivariant imaging, showing that group symmetries provide a self-supervisory signal for recovering the null-space component of the signal; a loss sketch follows this list.

  10. J. Tachella, D. Chen, and M. E. Davies, Unsupervised learning for computational imaging, 2023

    Comprehensive treatment of self-supervised methods for inverse problems, including EI, with convergence guarantees and applications.

  11. J. Chen, H. Chung, Y. Wang, and J. C. Ye, Reconstruct Anything Model, 2024

    Foundation model for image reconstruction trained across many forward operators, demonstrating zero-shot transfer to new inverse problems.

  12. N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. Hamprecht, Y. Bengio, and A. Courville, On the spectral bias of neural networks, 2019

    Establishes the spectral bias of neural networks, showing via Fourier analysis that ReLU networks fit low-frequency components of the target function before high-frequency ones.

  13. C. A. Metzler, A. Mousavi, R. Heckel, and R. G. Baraniuk, Unsupervised learning with Stein's unbiased risk estimator, 2018

    Connects SURE to unsupervised training of compressive sensing recovery algorithms.

  14. Y. C. Eldar, Generalized SURE for exponential families: Applications to regularization, 2008

    Extends SURE to exponential family noise models, providing the GSURE framework for inverse problems.

  15. M. Z. Darestani, J. Liu, and R. Heckel, Test-time training can close the natural distribution shift performance gap in deep learning based compressed sensing, 2022

    Uses DIP-like test-time training to adapt compressed-sensing reconstructions, relevant to domain shift in RF imaging.
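
The early-stopping phenomenon behind reference 1 is easiest to see in code. The following is a minimal sketch of our own, not the authors' implementation: a toy generator is fitted to a single noisy image (pure denoising, so the forward operator is the identity), and the iteration budget is the only regulariser.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    y = torch.rand(1, 1, 64, 64)       # stand-in for a noisy image
    z = torch.randn(1, 8, 64, 64)      # fixed random input code, never updated

    net = nn.Sequential(               # small untrained CNN generator
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    for it in range(500):              # the stopping iteration is the knob:
        opt.zero_grad()                # too few steps underfits the image,
        loss = ((net(z) - y) ** 2).mean()  # too many steps fits the noise
        loss.backward()
        opt.step()

    x_hat = net(z).detach()            # early-stopped reconstruction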
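
References 3-5 rest on two expectation identities short enough to record here; the notation below is ours, not the papers'. For Noise2Noise, let y = x + e and y' = x + e' with e' zero-mean and independent of y. Then

    \mathbb{E}\,\|f(y) - y'\|^2 \;=\; \mathbb{E}\,\|f(y) - x\|^2 \;+\; \mathbb{E}\,\|e'\|^2,

because the cross term \mathbb{E}[(f(y) - x)^\top e'] factorises over the independence and vanishes: minimising against noisy targets selects the same f as minimising against clean ones, up to an additive constant. For Noise2Void and Noise2Self, if f is J-invariant (its output on a pixel set J never reads the input on J) and the noise is zero-mean and independent across J, the same factorisation yields

    \mathbb{E}\,\|f(y) - y\|^2 \;=\; \mathbb{E}\,\|f(y) - x\|^2 \;+\; \mathbb{E}\,\|y - x\|^2,

so the fully self-supervised loss is again the supervised loss plus a constant.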
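
References 6-8 fit together in a few lines; the notation is again ours. For y = x + e with e ~ N(0, sigma^2 I_n) and a denoiser f, Stein's lemma states E[e^T f(y)] = sigma^2 E[div_y f(y)], which turns the unobservable risk into the computable estimate

    \mathrm{SURE}(f, y) = \|f(y) - y\|^2 - n\sigma^2 + 2\sigma^2\,\mathrm{div}_y f(y),
    \qquad
    \mathbb{E}[\mathrm{SURE}(f, y)] = \mathbb{E}\,\|f(y) - x\|^2.

Reference 7 minimises this estimate as a training loss; reference 8 replaces the divergence with a Monte Carlo probe so that no closed-form Jacobian is needed. A sketch of the resulting loss (our function and parameter names):

    import torch

    def mc_sure_loss(f, y, sigma, eps=1e-3):
        # f: denoiser (callable), y: noisy batch, sigma: known noise std.
        b = torch.randn_like(y)                         # random probe direction
        f_y = f(y)
        div = (b * (f(y + eps * b) - f_y)).sum() / eps  # Monte Carlo divergence
        n = y.numel()
        return ((f_y - y) ** 2).sum() - n * sigma**2 + 2 * sigma**2 * div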
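
The equivariant-imaging objective of reference 9 is equally compact. The sketch below is our paraphrase, with illustrative names: the measurement-consistency term pins down the range-space component of the signal, and the equivariance term supplies the missing supervision in the null space.

    import torch

    def ei_loss(f, A, A_adj, y, transform, lam=1.0):
        # f: reconstruction network; A, A_adj: forward operator and its
        # adjoint (callables); transform: a random group action.
        x_hat = f(A_adj(y))                     # reconstruct from measurements
        mc = ((A(x_hat) - y) ** 2).mean()       # range-space consistency
        x_t = transform(x_hat)                  # transformed "virtual" signal
        x_t_hat = f(A_adj(A(x_t)))              # re-measure, re-reconstruct
        ei = ((x_t_hat - x_t) ** 2).mean()      # null-space supervision
        return mc + lam * ei

A 90-degree rotation, for example, can be passed as transform=lambda x: torch.rot90(x, 1, dims=(-2, -1)).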

Further Reading

For readers who want to go deeper into specific topics from this chapter.

  • Spectral bias and implicit regularisation in neural networks

    N. Rahaman et al., 'On the spectral bias of neural networks,' Proc. ICML, 2019

    Provides the theoretical foundation for DIP's spectral bias via Fourier analysis of ReLU networks, explaining why such networks learn low-frequency image content before fitting high-frequency noise.

  • Self-supervised methods for computational imaging

    J. Tachella, D. Chen, and M. E. Davies, 'Unsupervised learning for computational imaging,' Computational Imaging, MIT Press, 2023

    The most comprehensive treatment of equivariant imaging, measurement splitting, and self-supervised inverse problems with convergence guarantees.

  • Foundation models for science and engineering

    R. Bommasani et al., 'On the opportunities and risks of foundation models,' Stanford CRFM, 2022

    Broad perspective on foundation models across scientific domains, discussing the opportunities and limitations of large pretrained models for tasks beyond natural language and vision.

  • DeepInverse: A Python library for imaging inverse problems

    J. Tachella, M. Chen, D. Chen, and M. E. Davies, 'DeepInverse,' GitHub repository, 2023

    Open-source PyTorch library implementing DIP, Noise2Noise, SURE training, equivariant imaging, and many of the other methods discussed in this chapter; a usage sketch appears at the end of this section.

  • Implicit neural representations for inverse problems

    Y. Sun, J. Liu, M. Xie, B. Wohlberg, and U. S. Kamilov, 'CoIL: Coordinate-based internal learning for imaging inverse problems,' IEEE Trans. Computational Imaging, vol. 7, 2021

    Extends the DIP idea with coordinate-based (NeRF-like) representations, providing a different architectural bias for single-measurement reconstruction; a coordinate-network sketch appears at the end of this section.
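
To give a flavour of DeepInverse, here is a minimal sketch in the library's style. The class and argument names follow the project's documentation as we recall it, and the API evolves between versions, so treat this as indicative rather than exact.

    import torch
    import deepinv as dinv

    # Simulated inpainting physics (argument names as we recall the deepinv
    # docs; check the repository for the current API).
    physics = dinv.physics.Inpainting(tensor_size=(1, 32, 32), mask=0.5)
    x = torch.rand(4, 1, 32, 32)       # ground truth, used only to simulate y
    y = physics(x)                     # masked measurements

    model = dinv.models.DnCNN(in_channels=1, out_channels=1, pretrained=None)
    loss = dinv.loss.EILoss(transform=dinv.transform.Rotate())
    # EILoss is meant to be combined with a measurement-consistency term
    # (e.g. dinv.loss.MCLoss()) inside the library's training utilities.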
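
Finally, a sketch of the coordinate-network idea behind the CoIL entry, again our own illustration rather than the authors' code: an MLP maps coordinates to measured values, and fitting it to a single set of measurements swaps DIP's convolutional bias for a coordinate-based one.

    import torch
    import torch.nn as nn

    # Query grid of (row, col) coordinates in [-1, 1]^2.
    coords = torch.stack(torch.meshgrid(
        torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij"
    ), dim=-1).reshape(-1, 2)

    mlp = nn.Sequential(               # coordinate -> value network
        nn.Linear(2, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )
    y = torch.rand(64 * 64, 1)         # stand-in for per-pixel measurements
    opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
    for _ in range(200):               # fit the implicit field to y
        opt.zero_grad()
        ((mlp(coords) - y) ** 2).mean().backward()
        opt.step()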