Prerequisites & Notation

Before You Begin

This chapter assumes familiarity with the following topics.

  • Neural network architectures (CNNs, U-Nets) and training via backpropagation (Chapter 20)

    Self-check: Can you describe the U-Net architecture and explain the role of skip connections?

  • Denoising as a building block for inverse problems; Tweedie's formula (Chapter 21)

    Self-check: Can you write the Gaussian denoising MMSE estimator as a conditional expectation? (A worked form appears after this list.)

  • Generative models for imaging: score-based diffusion and DDPM (Chapter 22)

    Self-check: Can you explain why diffusion models require paired or at least distribution-level training data?

  • Linear inverse problem formulation $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$ (Chapters 12–14)

    Self-check: Can you describe the forward model for RF imaging and the role of the sensing matrix $\mathbf{A}$? (See the worked forms after this list.)
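
Two of these self-checks have compact answers worth restating in this chapter's notation. The following is a reference sketch, not a replacement for the derivations in Chapters 12–14 and 21:

```latex
% Gaussian denoising (Chapter 21): observe y = c + w with w ~ N(0, sigma^2 I).
% The MMSE estimator is the conditional expectation, and Tweedie's formula
% rewrites it through the score of the noisy marginal p(y):
\hat{\mathbf{c}}_{\text{MMSE}}(\mathbf{y})
  = \mathbb{E}[\mathbf{c} \mid \mathbf{y}]
  = \mathbf{y} + \sigma^2 \nabla_{\mathbf{y}} \log p(\mathbf{y})

% Linear inverse problem (Chapters 12--14): measurements y, sensing matrix A,
% image coefficients c, noise w:
\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}
```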

Notation for This Chapter

Symbols introduced or heavily used in this chapter.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $f_\theta(\mathbf{z})$ | Generator network with parameters $\theta$ and fixed random input $\mathbf{z}$ (DIP) | s01 |
| $\mathbf{z}$ | Fixed random input to the DIP network (not optimised) | s01 |
| $\text{SURE}(f_\theta)$ | Stein's Unbiased Risk Estimate for denoiser $f_\theta$ | s03 |
| $\operatorname{div}(f_\theta)$ | Divergence of the denoiser: $\sum_i \partial [f_\theta(\mathbf{y})]_i / \partial y_i$ | s03 |
| $T_g$ | Group transformation operator (rotation, shift, flip) for $g \in \mathcal{G}$ | s04 |
| $\mathcal{G}$ | Group of transformations for equivariant imaging | s04 |
| $\mathcal{L}_{\text{EI}}$ | Equivariance loss | s04 |
| $\mathcal{L}_{\text{DC}}$ | Data-consistency loss | s04 |
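
The first two rows ($f_\theta(\mathbf{z})$ and $\mathbf{z}$) describe the deep image prior setup developed in s01. A minimal PyTorch sketch of the mechanics, using a stand-in conv net where the chapter would use a U-Net; all names here are illustrative, not the chapter's code:

```python
import torch
import torch.nn as nn

# Stand-in generator f_theta; the chapter's DIP uses a U-Net (Chapter 20),
# but any conv net illustrates the mechanics.
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
y = torch.randn(1, 1, 128, 128)    # placeholder for the single noisy image
z = torch.randn(1, 32, 128, 128)   # fixed random input z: never optimised

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(1800):           # early stopping is the regulariser:
    opt.zero_grad()                # stop before the net starts fitting noise
    loss = ((net(z) - y) ** 2).mean()
    loss.backward()
    opt.step()
x_hat = net(z).detach()            # restored image f_theta(z)
```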
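The $\text{SURE}(f_\theta)$ and $\operatorname{div}(f_\theta)$ rows combine into a single training loss in s03. A hedged PyTorch sketch, assuming i.i.d. Gaussian noise of known level $\sigma$ and estimating the divergence with a single Rademacher probe; `mc_divergence` and `sure_loss` are illustrative names:

```python
import torch

def mc_divergence(f, y, eps=1e-3):
    # Hutchinson-style Monte Carlo estimate of div(f_theta) at y:
    # div f(y) ~ b^T (f(y + eps*b) - f(y)) / eps, with b a Rademacher probe.
    b = torch.randint_like(y, low=0, high=2) * 2.0 - 1.0  # entries in {-1, +1}
    return (b * (f(y + eps * b) - f(y))).sum() / eps

def sure_loss(f, y, sigma):
    # SURE(f_theta) = (1/n)||f(y) - y||^2 - sigma^2 + (2 sigma^2 / n) div(f):
    # an unbiased estimate of the per-pixel MSE with no clean target needed.
    n = y.numel()
    fidelity = ((f(y) - y) ** 2).sum() / n
    return fidelity - sigma**2 + (2 * sigma**2 / n) * mc_divergence(f, y)
```

Because the estimate is unbiased, minimising `sure_loss` over $\theta$ approximates minimising the true MSE, even though no clean ground truth is ever observed.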
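Finally, the $T_g$, $\mathcal{L}_{\text{EI}}$, and $\mathcal{L}_{\text{DC}}$ rows can be read as one training step of equivariant imaging (s04). A sketch under the assumption that `A` and `A_pinv` are callables implementing the forward operator and a pseudo-inverse, and `T_g` implements one group transformation; again, the names are illustrative:

```python
def ei_losses(net, A, A_pinv, y, T_g):
    # L_DC: the reconstruction must explain the measurements y.
    x_hat = net(A_pinv(y))
    loss_dc = ((A(x_hat) - y) ** 2).mean()

    # L_EI: reconstructing the re-measured, transformed image should
    # reproduce the transformed image, i.e. the pipeline commutes with T_g.
    x_t = T_g(x_hat)                 # transformed reconstruction T_g x_hat
    x_t_hat = net(A_pinv(A(x_t)))    # measure and reconstruct again
    loss_ei = ((x_t_hat - x_t) ** 2).mean()
    return loss_dc, loss_ei
```

A training step would draw $g$ at random from $\mathcal{G}$ and minimise $\mathcal{L}_{\text{DC}} + \alpha \mathcal{L}_{\text{EI}}$ for some weight $\alpha$.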