Chapter 2 Summary: Ill-Posed Problems and Regularization Theory
Key Points
1. Hadamard well-posedness requires existence, uniqueness, and continuous dependence on the data. Most RF imaging inverse problems fail the stability condition because their forward operators are compact, with singular values $\sigma_n \to 0$. The decay rate quantifies the degree of ill-posedness: polynomial decay $\sigma_n \sim n^{-\beta}$ gives mild ill-posedness; exponential decay $\sigma_n \sim e^{-\gamma n}$ gives severe ill-posedness.
2. The Moore–Penrose pseudoinverse provides the minimum-norm least-squares solution via the SVD formula $A^\dagger y = \sum_n \sigma_n^{-1} \langle y, u_n \rangle v_n$, but it is unbounded for compact operators and catastrophically amplifies noise. The Picard condition $\sum_n \sigma_n^{-2} |\langle y, u_n \rangle|^2 < \infty$ identifies exactly when $A^\dagger y$ is well defined.
3. Regularization replaces the unbounded $A^\dagger$ with a family of bounded operators $R_\alpha$ converging pointwise to $A^\dagger$ as $\alpha \to 0$. The parameter $\alpha$ balances approximation error (bias) against noise amplification (variance). Source conditions of order $\nu$ (i.e. $x^\dagger = (A^*A)^\nu w$) yield the minimax-optimal convergence rate $O(\delta^{2\nu/(2\nu+1)})$.
4. Spectral regularization unifies TSVD, Tikhonov, and Landweber under the filter-function framework $x_\alpha = \sum_n g_\alpha(\sigma_n) \langle y, u_n \rangle v_n$. TSVD uses a sharp cutoff, keeping only components with $\sigma_n \ge \alpha$ (infinite qualification). Tikhonov uses the smooth roll-off $g_\alpha(\sigma) = \sigma/(\sigma^2 + \alpha)$ at $\sigma \approx \sqrt{\alpha}$ (finite qualification $\nu_0 = 1$; closed-form solution $x_\alpha = (A^*A + \alpha I)^{-1} A^* y$). Landweber uses the polynomial filter $g_k(\sigma) = \sigma^{-1}\bigl[1 - (1 - \tau\sigma^2)^k\bigr]$ with early stopping (infinite qualification; matrix-free).
5. Parameter choice rules: Morozov's discrepancy principle selects $\alpha$ so that $\|A x_\alpha^\delta - y^\delta\| = \tau\delta$ (order-optimal when the noise level $\delta$ is known), the L-curve (visual heuristic, no $\delta$ required), GCV (asymptotically optimal, no $\delta$ required), and SURE (unbiased risk estimate under Gaussian noise). For the Tikhonov case, the discrepancy equation has a unique solution because the residual $\|A x_\alpha^\delta - y^\delta\|$ is monotonically increasing in $\alpha$.
6. Variational regularization replaces the quadratic Tikhonov penalty with a task-specific functional $R(x)$, minimizing $\|Ax - y\|^2 + \alpha R(x)$, which yields the MAP estimate under the prior $p(x) \propto e^{-\alpha R(x)}$. LASSO ($R(x) = \|x\|_1$) promotes sparsity with exact recovery under RIP conditions. Total variation ($R(x) = \|\nabla x\|_1$) promotes piecewise-constant images with preserved edges. Group sparsity ($R(x) = \sum_g \|x_g\|_2$) enables joint support recovery in multi-frequency imaging.
7. Nonlinear inverse problems are handled by the iteratively regularized Gauss–Newton method (IRGNM): linearize the forward operator at each step and apply Tikhonov regularization to the linearized problem, with a decreasing sequence $\alpha_k \to 0$ (e.g. $\alpha_k = \alpha_0 q^k$ with $0 < q < 1$). Convergence rates match the linear theory under source conditions. In RF imaging, the Born iterative method is the specific instantiation using the Lippmann–Schwinger equation.
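Point 1 can be checked numerically. The sketch below uses a discretized 1D Gaussian smoothing operator as a stand-in compact forward operator (an illustrative choice with an assumed kernel width of 0.05, not the chapter's RF model) and inspects its singular value decay:

```python
import numpy as np

# Discretize a 1D Gaussian convolution operator A on [0, 1]; smoothing
# kernels are the classic example of a severely ill-posed compact operator.
n = 64
t = np.linspace(0, 1, n)
h = 1.0 / n
A = h * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))

# The singular values decay rapidly (roughly exponentially here), so noise
# in directions of small sigma_n is amplified by 1/sigma_n on inversion.
sigma = np.linalg.svd(A, compute_uv=False)
print(f"sigma_1 = {sigma[0]:.2e}, sigma_20 = {sigma[19]:.2e}, "
      f"condition number = {sigma[0]/sigma[-1]:.2e}")
```

The enormous condition number is the finite-dimensional fingerprint of severe ill-posedness: refining the grid only makes it worse.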
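The noise amplification of point 2 and the stabilizing effect of point 3 can be demonstrated side by side. This sketch (a hypothetical setup: Gaussian smoothing operator, step-function ground truth, hand-picked $\alpha$) compares the naive pseudoinverse with a Tikhonov solve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward operator and ground truth (illustrative, not the RF model).
n = 64
t = np.linspace(0, 1, n)
A = (1.0 / n) * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))
x_true = np.where((t > 0.3) & (t < 0.6), 1.0, 0.0)

y_noisy = A @ x_true + 1e-4 * rng.standard_normal(n)  # tiny noise

# Naive pseudoinverse: 1/sigma_n weights blow the noise up catastrophically.
x_pinv = np.linalg.pinv(A) @ y_noisy

# Tikhonov: x_alpha = (A^T A + alpha I)^{-1} A^T y keeps every weight bounded.
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_noisy)

err_pinv = np.linalg.norm(x_pinv - x_true) / np.linalg.norm(x_true)
err_tik = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
print(f"relative error: pseudoinverse {err_pinv:.1e}, Tikhonov {err_tik:.1e}")
```

Even with noise at the $10^{-4}$ level, the pseudoinverse reconstruction is useless while the regularized one is reasonable: exactly the bias–variance trade-off of point 3.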
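The three methods of point 4 differ only in their filter factors $f_\alpha(\sigma) = \sigma\, g_\alpha(\sigma)$, which multiply each spectral component of the naive inverse. A minimal comparison on a sweep of singular values (the values of $\alpha$, $k$, and $\tau$ below are illustrative assumptions):

```python
import numpy as np

sigma = np.logspace(-4, 0, 9)   # sweep of singular values
alpha = 1e-2                    # regularization parameter
k = 200                         # Landweber iteration count (early stopping)
tau = 1.0                       # Landweber step size; needs tau < 2/sigma_max^2

f_tsvd = (sigma >= np.sqrt(alpha)).astype(float)   # sharp cutoff
f_tik  = sigma**2 / (sigma**2 + alpha)             # smooth roll-off
f_land = 1.0 - (1.0 - tau * sigma**2)**k           # polynomial filter

for s, a, b, c in zip(sigma, f_tsvd, f_tik, f_land):
    print(f"sigma={s:.0e}  TSVD={a:.0f}  Tikhonov={b:.3f}  Landweber={c:.3f}")
```

All three factors are close to 1 for large $\sigma$ (components kept) and close to 0 for small $\sigma$ (components suppressed); only the sharpness of the transition differs.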
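Because the Tikhonov residual is monotone in $\alpha$ (point 5), the discrepancy equation can be solved by simple bisection. A sketch under assumed toy data (Gaussian smoothing operator, known noise level $\delta$, safety factor $\tau = 1.1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
t = np.linspace(0, 1, n)
A = (1.0 / n) * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))
x_true = np.sin(2 * np.pi * t)
delta = 1e-3                                   # assumed known noise level
y = A @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)  # ||noise|| ~ delta

def residual(alpha):
    x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
    return np.linalg.norm(A @ x - y)

# Monotonicity of the residual in alpha means bisection (here on a log
# scale) finds the unique alpha with ||A x_alpha - y|| = tau * delta.
tau = 1.1
lo, hi = 1e-16, 1.0
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if residual(mid) < tau * delta:
        lo = mid
    else:
        hi = mid
alpha_star = np.sqrt(lo * hi)
print(f"discrepancy alpha = {alpha_star:.2e}, "
      f"residual = {residual(alpha_star):.2e}, target = {tau * delta:.2e}")
```

Each bisection step costs one Tikhonov solve; in practice one factorizes the normal equations once per $\alpha$, or uses a Newton update on the monotone discrepancy function.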
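The LASSO functional of point 6 is typically minimized by proximal gradient descent (ISTA), whose only nonstandard ingredient is soft thresholding. A self-contained sparse-recovery sketch, using a random Gaussian matrix as a stand-in incoherent measurement operator (an assumption; it is not the chapter's RF forward model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse recovery: minimize 0.5*||Ax - y||^2 + lam*||x||_1 via ISTA.
m, n, s = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = np.sort(rng.choice(n, size=s, replace=False))
x_true[support] = rng.choice([-1.0, 1.0], size=s)
y = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L                            # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft threshold

recovered = np.nonzero(np.abs(x) > 0.1)[0]
print("recovered support:", recovered)
print("true support     :", support)
```

With $m = 40$ random measurements of a 4-sparse length-100 signal, the support is recovered exactly, as the RIP-based theory predicts for this regime.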
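The IRGNM loop of point 7 can be sketched on a toy nonlinear forward map. Everything below is an illustrative assumption: the mildly nonlinear map $F(x) = Ax + c\,(Ax)^2$ (elementwise square) stands in for a scattering model, and is not the Lippmann–Schwinger operator; the data are noiseless.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = rng.standard_normal((n, n)) / np.sqrt(n)
c = 0.1

def F(x):                      # toy nonlinear forward map
    z = A @ x
    return z + c * z**2

def jacobian(x):               # F'(x) = diag(1 + 2c*Ax) @ A
    z = A @ x
    return (1.0 + 2 * c * z)[:, None] * A

x_true = rng.standard_normal(n)
y = F(x_true)

# IRGNM with x0 = 0: linearize at x_k, Tikhonov-solve the linearized
# problem, and decrease alpha_k geometrically (alpha_k = alpha_0 * q^k).
x = np.zeros(n)
alpha, q = 1.0, 0.5
for _ in range(30):
    J = jacobian(x)
    x = np.linalg.solve(J.T @ J + alpha * np.eye(n),
                        J.T @ (y - F(x) + J @ x))
    alpha *= q

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.2e}")
```

With noisy data the iteration would instead be stopped by a discrepancy criterion rather than run until $\alpha_k$ is negligible.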
Looking Ahead
Chapter 3 develops the Bayesian framework for inverse problems in depth: from MAP estimation (the connection to variational regularization established here) to full posterior distributions, credible regions, and uncertainty quantification. Chapter 3 also treats sparsity-promoting priors (Bernoulli–Gaussian, spike-and-slab, horseshoe) that go beyond the Laplace/Tikhonov priors of this chapter, and introduces Sparse Bayesian Learning (SBL) as a bridge to message-passing algorithms. Chapter 4 then provides the computational tools — fast operators, GPU acceleration, automatic differentiation — needed to make the regularization methods of Chapters 2–3 run on real-scale RF imaging problems.