Chapter 2 Summary: Ill-Posed Problems and Regularization Theory

Key Points

1. Hadamard well-posedness requires existence, uniqueness, and continuous dependence on the data. Most RF imaging inverse problems fail the stability condition because their forward operators are compact with decaying singular values $\sigma_k \to 0$. The degree of ill-posedness is quantified by the decay rate: polynomial decay $\sigma_k \sim k^{-p}$ gives mild ill-posedness; exponential decay $\sigma_k \sim e^{-ck^\beta}$ gives severe ill-posedness. A numerical sketch of this decay follows below.
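A minimal numpy sketch of singular-value decay, using a discretized 1-D Gaussian-blur kernel as a stand-in for an RF imaging forward operator; the grid size n = 200 and kernel width 0.1 are illustrative choices, not values from the chapter:

```python
import numpy as np

# Singular-value decay of a discretized compact operator.
# A 1-D Gaussian-blur kernel stands in for an RF imaging forward map.
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1**2)) / n  # smoothing kernel

sigma = np.linalg.svd(A, compute_uv=False)
print("sigma_1 =", sigma[0])
print("sigma_20 / sigma_1 =", sigma[19] / sigma[0])   # rapid (near-exponential) decay
print("condition number ~", sigma[0] / sigma[-1])      # huge => severely ill-posed
```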

2. The Moore–Penrose pseudoinverse provides the minimum-norm least-squares solution via the SVD formula $\mathcal{A}^\dagger y = \sum_k \sigma_k^{-1} \langle y, u_k\rangle v_k$, but it is unbounded for compact operators and catastrophically amplifies noise; the Picard condition identifies exactly when $\mathcal{A}^\dagger y$ is well-defined. The sketch below illustrates the failure.
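A sketch of why the naive pseudoinverse fails, reusing the same illustrative Gaussian-blur operator; the box-shaped scene and the 1e-6 noise level are assumptions chosen for the demonstration:

```python
import numpy as np

# Noise amplification by the pseudoinverse, computed via the SVD.
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1**2)) / n

x_true = (np.abs(t - 0.5) < 0.2).astype(float)      # box-shaped scene
y_noisy = A @ x_true + 1e-6 * rng.standard_normal(n)  # tiny additive noise

U, s, Vt = np.linalg.svd(A)
coeffs = U.T @ y_noisy
# Picard check: |<y, u_k>| must decay faster than sigma_k
# for A^dagger y to be well-defined.
print("|<y,u_k>|/sigma_k at k=10:", abs(coeffs[10]) / s[10])
print("|<y,u_k>|/sigma_k at k=100:", abs(coeffs[100]) / s[100])  # blows up

x_dagger = Vt.T @ (coeffs / s)                       # naive pseudoinverse solution
print("||x_dagger - x_true|| =", np.linalg.norm(x_dagger - x_true))  # catastrophic
```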

3. Regularization replaces the unbounded $\mathcal{A}^\dagger$ with a family of bounded operators $R_\alpha$ converging pointwise to $\mathcal{A}^\dagger$ as $\alpha \to 0$. The parameter $\alpha$ balances approximation error (bias) against noise amplification (variance), as the sweep below illustrates. Source conditions of order $\mu$ yield the minimax optimal convergence rate $O(\delta^{2\mu/(2\mu+1)})$.
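A sketch of the bias-variance tradeoff: Tikhonov solutions $x_\alpha = (\mathcal{A}^*\mathcal{A} + \alpha I)^{-1}\mathcal{A}^* y$ over a sweep of $\alpha$ on the same illustrative blur problem; the $\alpha$ grid and the 1e-4 noise level are assumptions:

```python
import numpy as np

# Reconstruction error of Tikhonov solutions as a function of alpha.
rng = np.random.default_rng(1)
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1**2)) / n
x_true = np.sin(2 * np.pi * t)
y = A @ x_true + 1e-4 * rng.standard_normal(n)

for alpha in [1e-1, 1e-3, 1e-5, 1e-7, 1e-9]:
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
    print(f"alpha={alpha:.0e}  error={np.linalg.norm(x_alpha - x_true):.3e}")
# The error typically first decreases (bias shrinks) and then increases
# (noise amplification); the minimum sits at an intermediate alpha.
```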

4. Spectral regularization unifies TSVD, Tikhonov, and Landweber under the filter-function framework $R_\alpha = F_\alpha(\mathcal{A}^*\mathcal{A})\mathcal{A}^*$. TSVD uses a sharp cutoff (infinite qualification). Tikhonov uses a smooth roll-off $\sigma^2/(\sigma^2+\alpha)$ at $\sigma \approx \sqrt{\alpha}$ (finite qualification $\mu_0 = 2$; closed-form solution). Landweber uses a polynomial filter with early stopping (infinite qualification; matrix-free). The three filters are sketched below.
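A sketch of the three methods in the unified filter view: each multiplies the SVD coefficients $\langle y, u_k\rangle/\sigma_k$ by a filter value before summing. The helper names (tsvd_filter, tikhonov_filter, landweber_filter, filtered_solution) are ours for illustration, not the chapter's code:

```python
import numpy as np

def tsvd_filter(s, cutoff):
    return (s >= cutoff).astype(float)               # sharp cutoff

def tikhonov_filter(s, alpha):
    return s**2 / (s**2 + alpha)                     # smooth roll-off at s ~ sqrt(alpha)

def landweber_filter(s, tau, n_iter):
    # Filter after n_iter Landweber steps; needs tau < 2 / s.max()**2 for stability.
    return 1.0 - (1.0 - tau * s**2) ** n_iter        # early stopping = small n_iter

def filtered_solution(U, s, Vt, y, f):
    # x = sum_k f(sigma_k) * <y, u_k> / sigma_k * v_k
    return Vt.T @ (f * (U.T @ y) / s)
```

In this view early stopping of Landweber plays the role of the regularization parameter: increasing n_iter sharpens the polynomial filter toward the TSVD cutoff.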

5. Parameter choice rules: Morozov's discrepancy principle (order-optimal when $\delta$ is known), the L-curve (visual heuristic, no $\delta$ required), GCV (asymptotically optimal, no $\delta$ required), and SURE (unbiased risk estimate under Gaussian noise). For the Tikhonov case, the discrepancy equation has a unique solution because the residual $\varphi(\alpha) = \|\mathcal{A}x_\alpha^\delta - y^\delta\|^2$ is monotonically increasing in $\alpha$; the bisection sketch below exploits this monotonicity.
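A sketch of the discrepancy principle for Tikhonov, implemented by bisection in $\log\alpha$, which the monotonicity of the residual justifies; the function name discrepancy_alpha, the safety factor tau = 1.1, and the bracketing interval are illustrative assumptions:

```python
import numpy as np

# Find alpha such that ||A x_alpha - y|| = tau * delta, with tau slightly
# above 1, via bisection in log-alpha (valid: residual is monotone in alpha).
def discrepancy_alpha(A, y, delta, tau=1.1, lo=1e-14, hi=1e2, iters=60):
    n = A.shape[1]
    def residual(alpha):
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
        return np.linalg.norm(A @ x - y)
    target = tau * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                # geometric midpoint
        if residual(mid) < target:
            lo = mid                          # residual too small -> raise alpha
        else:
            hi = mid
    return np.sqrt(lo * hi)
```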

6. Variational regularization replaces the quadratic Tikhonov penalty with a task-specific functional $R(x)$, yielding the MAP estimate under the prior $p(x) \propto e^{-\lambda R(x)}$. LASSO ($R = \|\cdot\|_1$) promotes sparsity with exact recovery under RIP conditions; a proximal-gradient sketch follows below. Total variation ($R = \mathrm{TV}(\cdot)$) promotes piecewise-constant images with preserved edges. Group sparsity ($R = \|\cdot\|_{2,1}$) enables joint support recovery in multi-frequency imaging.
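A sketch of ISTA (proximal gradient descent) for the LASSO functional $\tfrac{1}{2}\|\mathcal{A}x - y\|^2 + \lambda\|x\|_1$; ISTA is one standard solver among several (FISTA, ADMM), and the function names are ours:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Gradient step on the data term, then soft-thresholding (the l1 prox).
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```

The soft-thresholding step is exactly what makes the iterates sparse: any coefficient whose gradient update stays below lam / L is set to zero.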

7. Nonlinear inverse problems are handled by the iteratively regularized Gauss–Newton method (IRGNM): linearize the forward operator at each step and apply Tikhonov regularization to the linearized problem, with a decreasing sequence $\alpha_n \to 0$ (see the sketch below). Convergence rates match the linear theory under source conditions. In RF imaging, the Born iterative method is the specific instantiation using the Lippmann–Schwinger equation.
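A sketch of the IRGNM iteration for a generic forward map F with Jacobian jac; the geometric decay $\alpha_n = \alpha_0 q^n$ with q = 0.5, and the toy forward map in the usage comment, are illustrative assumptions rather than the chapter's specific scattering operator:

```python
import numpy as np

# At each step, solve a Tikhonov-regularized linearization (with the penalty
# anchored at the initial guess x0) and shrink alpha geometrically.
def irgnm(F, jac, y, x0, alpha0=1.0, q=0.5, n_iter=15):
    x, alpha = x0.copy(), alpha0
    for _ in range(n_iter):
        J = jac(x)                                   # Frechet derivative at x
        dx = np.linalg.solve(J.T @ J + alpha * np.eye(len(x)),
                             J.T @ (y - F(x)) + alpha * (x0 - x))
        x = x + dx
        alpha *= q                                   # alpha_n -> 0
    return x

# Toy usage with a mildly nonlinear forward map F(x) = A sin(x):
#   F   = lambda x: A @ np.sin(x)
#   jac = lambda x: A * np.cos(x)[None, :]          # equals A @ diag(cos(x))
```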

Looking Ahead

Chapter 3 develops the Bayesian framework for inverse problems in depth: from MAP estimation (the connection to variational regularization established here) to full posterior distributions, credible regions, and uncertainty quantification. Chapter 3 also treats sparsity-promoting priors (Bernoulli–Gaussian, spike-and-slab, horseshoe) that go beyond the Laplace/Tikhonov priors of this chapter, and introduces Sparse Bayesian Learning (SBL) as a bridge to message-passing algorithms. Chapter 4 then provides the computational tools — fast operators, GPU acceleration, automatic differentiation — needed to make the regularization methods of Chapters 2–3 run on real-scale RF imaging problems.