Prerequisites & Notation
Before You Begin
This chapter assumes familiarity with the following topics. If any item feels unfamiliar, revisit the linked material first.
- Linear imaging model and the matched filter (Chapter 13)
Self-check: Can you explain why $\mathbf{A}^{\mathsf{H}}$ is not a true inverse of $\mathbf{A}$ and what the Gram matrix $\mathbf{A}^{\mathsf{H}}\mathbf{A}$ encodes? (A small numeric check follows this list.)
- AMP and OAMP for RF imaging, state evolution (Chapter 17)
Self-check: Can you describe the role of the denoiser in OAMP and explain why the effective noise at each iteration is approximately Gaussian? (An empirical sketch follows this list.)
- Learned OAMP and deep unfolding: LISTA, unrolled algorithms, generalization bounds (Chapter 18)
Self-check: Can you sketch the computational graph of an unrolled ISTA and identify which parameters are learned? (A minimal unrolled network is sketched after this list.)
- U-Net encoder-decoder architecture, skip connections, convolutional blocks (external: Sci_Python Part VI or equivalent)
Self-check: Can you describe the encoder-decoder structure of a U-Net and explain how skip connections preserve fine spatial detail? (A toy U-Net appears after this list.)
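For the first self-check, here is a minimal NumPy sketch; the random matrix `A` and the scatterer positions are illustrative stand-ins for the RF forward operator of Chapter 13, not the book's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 128, 256                            # fewer measurements than scene pixels
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in forward operator
x = np.zeros(n)
x[[40, 90, 200]] = 1.0                     # three point scatterers
y = A @ x                                  # noiseless measurements

x_mf = A.T @ y                             # matched-filter (back-projected) image; real A, so adjoint = transpose
G = A.T @ A                                # Gram matrix = discrete point-spread function

print(np.allclose(x_mf, x))                # False: the matched filter is not an inverse
print(np.allclose(x_mf, G @ x))            # True: the MF image is the scene blurred by G
```

Since $\mathbf{x}_{\mathrm{MF}} = \mathbf{G}\mathbf{x}$, the matched filter would only invert $\mathbf{A}$ if $\mathbf{G} = \mathbf{I}$; in general $\mathbf{G}$ smears each scatterer by the corresponding column of the point-spread function.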
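For the second self-check, a rough empirical illustration rather than the full OAMP recursion: with an i.i.d. Gaussian stand-in for $\mathbf{A}$ and a de-correlated linear step $\mathbf{W} = c\,\mathbf{A}^{\mathsf{T}}$ scaled so that $\operatorname{tr}(\mathbf{W}\mathbf{A}) = n$, the error at the denoiser input is close to zero-mean Gaussian. All sizes and the noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 1000, 2000
A = rng.normal(size=(m, n)) / np.sqrt(m)          # i.i.d. Gaussian stand-in for A
x = rng.normal(size=n) * (rng.random(n) < 0.1)    # sparse scene
y = A @ x + 0.05 * rng.normal(size=m)

c = n / np.sum(A * A)      # = n / trace(A^T A), so trace(c A^T A) = n
r = c * (A.T @ y)          # denoiser input after one linear step from x_0 = 0
e = r - x                  # effective noise seen by the denoiser

print(e.mean(), e.std())                           # roughly zero mean
print(np.mean(((e - e.mean()) / e.std()) ** 4))    # kurtosis ~ 3, as for a Gaussian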
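For the third self-check, a minimal PyTorch sketch of a LISTA-style unrolled ISTA; the class name `UnrolledISTA`, the depth `T`, and the initialisation scales are illustrative choices, not Chapter 18's exact parameterisation.

```python
import torch
import torch.nn as nn

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm (the ISTA shrinkage step)."""
    return torch.sign(v) * torch.clamp(v.abs() - theta, min=0.0)

class UnrolledISTA(nn.Module):
    """T unrolled ISTA iterations with learned weights (LISTA-style).

    Learned per layer: We (plays the role of eta * A^T), S (plays
    I - eta * A^T A), and the threshold theta.
    """
    def __init__(self, m, n, T=6):
        super().__init__()
        self.We = nn.ParameterList([nn.Parameter(0.01 * torch.randn(n, m)) for _ in range(T)])
        self.S = nn.ParameterList([nn.Parameter(0.01 * torch.randn(n, n)) for _ in range(T)])
        self.theta = nn.ParameterList([nn.Parameter(torch.tensor(0.1)) for _ in range(T)])

    def forward(self, y):
        x = torch.zeros(self.We[0].shape[0], device=y.device)
        for We, S, theta in zip(self.We, self.S, self.theta):
            x = soft_threshold(We @ y + S @ x, theta)   # one unrolled layer
        return x

net = UnrolledISTA(m=128, n=256, T=6)
x_hat = net(torch.randn(128))   # one forward pass through the unrolled graph
```

Each loop iteration is one layer of the computational graph; the learned parameters are the per-layer `We`, `S`, and `theta`, trained end-to-end by backpropagation on pairs of measurements and ground-truth scenes.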
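And for the last self-check, a toy one-level U-Net in PyTorch; `TinyUNet` and `conv_block` are illustrative names, and real U-Nets stack several such encoder/decoder levels.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, decoder with one skip connection."""
    def __init__(self, c=16):
        super().__init__()
        self.enc = conv_block(1, c)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(c, 2 * c)
        self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
        self.dec = conv_block(2 * c, c)          # 2c channels: upsampled + skip
        self.out = nn.Conv2d(c, 1, 1)

    def forward(self, x):
        e = self.enc(x)                  # full-resolution features
        m = self.mid(self.down(e))       # coarse, semantic features
        u = self.up(m)                   # back to full resolution
        u = torch.cat([u, e], dim=1)     # skip connection preserves fine detail
        return self.out(self.dec(u))

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```

The skip connection concatenates the full-resolution encoder features onto the upsampled decoder features, so fine spatial detail bypasses the lossy downsampling path.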
Notation for This Chapter
Symbols introduced or heavily used in this chapter; notation follows the book's defaults.
| Symbol | Meaning | Introduced |
|---|---|---|
| $\mathbf{A}$ | Forward (sensing) operator / matrix | s01 |
| $\mathbf{A}^{\mathsf{H}}$ | Adjoint (matched-filter) operator | s01 |
| $\mathbf{x}$ | Discretized scene reflectivity vector | s01 |
| $\hat{\mathbf{x}}_{\mathrm{MF}}$ | Matched-filter (back-projected) image: $\hat{\mathbf{x}}_{\mathrm{MF}} = \mathbf{A}^{\mathsf{H}}\mathbf{y}$ | s01 |
| $\mathbf{G} = \mathbf{A}^{\mathsf{H}}\mathbf{A}$ | Gram matrix (discrete point-spread function) | s01 |
| $f_{\theta}$ | Neural network with trainable parameters $\theta$ | s01 |
| $\mathcal{L}$ | Training loss function | s04 |
| $\mathcal{L}_{\mathrm{perc}}$ | Perceptual loss (VGG feature-space distance) | s04 |
| $\mathcal{L}_{\mathrm{adv}}$ | Adversarial (GAN) loss | s04 |
| $\eta$ | Learned step size for data-consistency updates in MoDL | s02 |
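As a pointer to how the last table entry is used, here is a hedged PyTorch sketch of a single gradient data-consistency update with a learned step size $\eta$; the gradient form of the update and the class name `DataConsistency` are assumptions about the MoDL variant of s02, not a definitive implementation.

```python
import torch
import torch.nn as nn

class DataConsistency(nn.Module):
    """One gradient data-consistency update x <- x - eta * A^H (A x - y),
    a sketch of the step the learned step size eta belongs to."""
    def __init__(self, A):
        super().__init__()
        self.A = A                                  # forward operator, shape (m, n)
        self.eta = nn.Parameter(torch.tensor(0.5))  # learned step size (initial value arbitrary)

    def forward(self, x, y):
        residual = self.A @ x - y                   # data misfit A x - y
        return x - self.eta * (self.A.conj().T @ residual)
```

In a MoDL-style unrolled network this update alternates with a denoiser $f_{\theta}$, and $\eta$ is trained jointly with $\theta$.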