Exercises

ex-ch02-01

Easy

Consider the forward operator $\mathcal{A} \colon \mathbb{R}^3 \to \mathbb{R}^2$ given by the matrix

$$\mathbf{A} = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 3 \end{pmatrix}.$$

For each of Hadamard's three conditions (existence, uniqueness, stability), determine whether it holds for the equation $\mathbf{A}\mathbf{x} = \mathbf{y}$.
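
A quick numerical probe makes the three conditions concrete (a sketch, assuming NumPy; not required for the pen-and-paper argument):

```python
import numpy as np

# The 2x3 forward matrix from the exercise.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

rank = np.linalg.matrix_rank(A)
svals = np.linalg.svd(A, compute_uv=False)

print("rank:", rank)                               # full row rank -> a solution exists for every y
print("null-space dimension:", A.shape[1] - rank)  # > 0 -> solutions are not unique
print("singular values:", svals)                   # bounded away from 0 -> stable (finite-dim)
```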

ex-ch02-02

Easy

An operator has singular values $\sigma_k = 1/k$ and left singular vectors $u_k$. For which of the following data does the Picard condition hold?

(a) $\langle y, u_k \rangle = 1/k^3$

(b) $\langle y, u_k \rangle = 1/k$

(c) $\langle y, u_k \rangle = (-1)^k / k^2$
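
One way to build intuition is to track the partial sums $\sum_{k \le K} \langle y, u_k\rangle^2/\sigma_k^2$ numerically; the Picard condition holds exactly when they stay bounded. A sketch (assuming NumPy; the cutoff $K = 10^4$ is an arbitrary illustration choice):

```python
import numpy as np

k = np.arange(1, 10_001)
sigma = 1.0 / k

cases = {"(a) 1/k^3": 1.0 / k**3,
         "(b) 1/k": 1.0 / k,
         "(c) (-1)^k/k^2": (-1.0) ** k / k**2}

for label, coeff in cases.items():
    # Picard condition: the series sum_k (<y,u_k>/sigma_k)^2 must converge.
    partial = np.cumsum((coeff / sigma) ** 2)
    print(label, "partial sums at K = 10, 100, 10000:", partial[[9, 99, -1]])
```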

ex-ch02-03

Easy

Compute the Tikhonov regularised solution $x_\alpha$ for the problem $\mathcal{A}x = y$, where $\mathcal{A}$ is represented by the $2 \times 2$ matrix

$$\mathbf{A} = \begin{pmatrix} 3 & 0 \\ 0 & 0.01 \end{pmatrix}, \qquad \mathbf{y}^\delta = \begin{pmatrix} 3.1 \\ 0.5 \end{pmatrix},$$

for $\alpha = 0.01$. Compare with the naive inverse $\mathbf{A}^{-1}\mathbf{y}^\delta$ and explain the difference.
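
A sketch of the computation (assuming NumPy), using the normal-equation form $\mathbf{x}_\alpha = (\mathbf{A}^T\mathbf{A} + \alpha\mathbf{I})^{-1}\mathbf{A}^T\mathbf{y}^\delta$:

```python
import numpy as np

A = np.diag([3.0, 0.01])
y = np.array([3.1, 0.5])
alpha = 0.01

# Tikhonov: minimise ||A x - y||^2 + alpha ||x||^2.
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ y)
x_naive = np.linalg.solve(A, y)

print("Tikhonov:", x_alpha)  # the well-determined first component barely changes,
print("naive:   ", x_naive)  # while the naive second component 0.5/0.01 = 50 is damped to ~0.5
```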

ex-ch02-04

Medium

An operator has singular values $\sigma_k = 1/k$ for $k = 1, \ldots, 100$. The noisy data has exact coefficients $\langle y, u_k \rangle = 1/k^2$ and noise coefficients with $|\langle \eta, u_k \rangle| = \delta = 0.01$ for all $k$.

(a) Find the optimal truncation level $K^*$ for TSVD.

(b) Find the optimal Tikhonov parameter $\alpha^*$.

(c) Compare the two minimum errors.
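
Part (c) is easiest to appreciate numerically. A sketch (assuming NumPy) that scans $K$ and $\alpha$, treating the per-mode bias and the stated noise level $\delta$ as adding in quadrature; check the chapter's conventions for the exact error model intended:

```python
import numpy as np

k = np.arange(1, 101)
sigma = 1.0 / k
x_true = (1.0 / k**2) / sigma          # true coefficients: <y,u_k>/sigma_k = 1/k
delta = 0.01

# TSVD: noise amplification on the kept modes + bias from the truncated tail.
tsvd = [np.sqrt(np.sum((delta / sigma[:K]) ** 2) + np.sum(x_true[K:] ** 2)) for K in k]
K_star = int(np.argmin(tsvd)) + 1

# Tikhonov: per-mode bias alpha/(sigma^2+alpha) and noise sigma/(sigma^2+alpha).
alphas = np.logspace(-6, 0, 400)
tik = [np.sqrt(np.sum((a * x_true / (sigma**2 + a)) ** 2)
               + np.sum((delta * sigma / (sigma**2 + a)) ** 2)) for a in alphas]

print("TSVD:     K* =", K_star, " min error ≈", min(tsvd))
print("Tikhonov: alpha* ≈", alphas[int(np.argmin(tik))], " min error ≈", min(tik))
```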

ex-ch02-05

Medium

For the discrete Tikhonov problem with SVD $\mathbf{A} = \sum_k \sigma_k \mathbf{u}_k \mathbf{v}_k^T$, show that the discrepancy function $\varphi(\alpha) = \|\mathbf{A}\mathbf{x}_\alpha^\delta - \mathbf{y}^\delta\|^2$ is monotonically increasing in $\alpha$.

Use this to prove that the equation $\varphi(\alpha) = \tau^2\delta^2$ has a unique solution $\alpha^* > 0$ whenever $\tau\delta < \|\mathbf{y}^\delta\|$.
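
Monotonicity is also what makes the discrepancy principle computable: $\varphi$ is continuous and increasing, so $\varphi(\alpha) = \tau^2\delta^2$ can be solved by bisection. A sketch on a toy diagonal problem (assuming NumPy; the singular values, data, and $\tau = 1.1$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 50
sigma = 1.0 / np.arange(1, K + 1)               # toy singular values
noise = 0.01 * rng.standard_normal(K)
d = sigma / np.arange(1, K + 1) + noise         # noisy data coefficients <y, u_k>

def phi(alpha):
    # Tikhonov residual: each coefficient of A x_alpha - y equals -alpha d_k/(sigma_k^2+alpha).
    return np.sum((alpha * d / (sigma**2 + alpha)) ** 2)

target = (1.1 * np.linalg.norm(noise)) ** 2     # tau^2 delta^2 with tau = 1.1

lo, hi = 1e-12, 1e6                             # phi(lo) < target < phi(hi)
for _ in range(200):                            # bisection in log(alpha); phi is monotone
    mid = np.sqrt(lo * hi)
    lo, hi = ((lo, mid) if phi(mid) > target else (mid, hi))
print("alpha* ≈", mid, " phi(alpha*) ≈", phi(mid), " target =", target)
```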

ex-ch02-06

Medium

Prove that the Landweber filter function after $n$ iterations with step size $\omega$ is

$$F_n(\sigma^2) = \frac{1}{\sigma^2}\bigl[1 - (1 - \omega\sigma^2)^n\bigr],$$

and show that $\sup_{\sigma > 0} |\sigma F_n(\sigma^2)| \leq \sqrt{n\omega}$.
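
Both claims can be sanity-checked numerically before proving them: run the Landweber recursion $x_{j+1} = x_j + \omega\sigma(y_\sigma - \sigma x_j)$ per singular component and compare with the closed form (a sketch, assuming NumPy; $\omega$ and $n$ are arbitrary choices with $\omega\sigma^2 < 2$):

```python
import numpy as np

omega, n = 0.5, 25
sigma = np.linspace(1e-3, np.sqrt(2.0 / omega) - 1e-3, 2000)  # convergence needs omega*sigma^2 < 2

# Landweber recursion per component, with data coefficient y_sigma = 1 and x_0 = 0.
x = np.zeros_like(sigma)
for _ in range(n):
    x = x + omega * sigma * (1.0 - sigma * x)

F_n = (1.0 - (1.0 - omega * sigma**2) ** n) / sigma**2        # claimed filter function
print("max |recursion - sigma*F_n|:", np.max(np.abs(x - sigma * F_n)))
print("sup |sigma F_n| on grid:", np.max(np.abs(sigma * F_n)),
      " vs bound sqrt(n*omega) =", np.sqrt(n * omega))
```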

ex-ch02-07

Medium

For the integral operator on $L^2([0,1])$ with singular values $\sigma_k = k^{-2}$, determine which of the following functions satisfy a source condition of order $\mu = 1$:

(a) $\langle x^\dagger, v_k\rangle = k^{-3}$

(b) $\langle x^\dagger, v_k\rangle = k^{-2}$

(c) $\langle x^\dagger, v_k\rangle = k^{-5}$
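
Assuming the convention $x^\dagger \in \mathrm{range}\bigl((\mathcal{A}^*\mathcal{A})^\mu\bigr)$, the condition amounts to $\sum_k \bigl(\langle x^\dagger, v_k\rangle/\sigma_k^{2\mu}\bigr)^2 < \infty$, which can be probed with partial sums (a sketch, assuming NumPy):

```python
import numpy as np

k = np.arange(1, 10_001)
sigma = k ** -2.0
mu = 1

for label, coeff in [("(a) k^-3", k ** -3.0),
                     ("(b) k^-2", k ** -2.0),
                     ("(c) k^-5", k ** -5.0)]:
    # Source condition: w_k = coeff_k / sigma_k^(2*mu) must be square-summable.
    w2 = (coeff / sigma ** (2 * mu)) ** 2
    print(label, "partial sums at K = 10, 100, 10000:", np.cumsum(w2)[[9, 99, -1]])
```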

ex-ch02-08

Medium

Show that the LASSO solution $\hat{x}$ satisfies: if $|[\mathcal{A}^*(\mathcal{A}\hat{x} - y)]_i| < \lambda$, then $\hat{x}_i = 0$.

Use this to argue that the threshold $\lambda = \sigma_n\sqrt{2\log n}$ (the universal threshold) suppresses all noise-only components with high probability when $\mathcal{A}^*\eta \sim \mathcal{N}(0, \sigma_n^2 I)$.
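
The probabilistic claim in the second part is easy to check by Monte Carlo (a sketch, assuming NumPy; $n$, $\sigma_n$, and the trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_n, trials = 10_000, 1.0, 500
lam = sigma_n * np.sqrt(2 * np.log(n))        # universal threshold

# Fraction of trials in which at least one pure-noise component exceeds lambda.
exceed = [np.any(np.abs(sigma_n * rng.standard_normal(n)) > lam) for _ in range(trials)]
print("P(max_k |noise_k| > lambda) ≈", np.mean(exceed))  # modest already, and -> 0 as n grows
```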

ex-ch02-09

Hard

Derive the curvature formula for the L-curve. For Tikhonov regularisation, let $\rho(\alpha) = \log\|\mathbf{A}\mathbf{x}_\alpha - \mathbf{y}^\delta\|$ and $\eta(\alpha) = \log\|\mathbf{x}_\alpha\|$.

Compute $\rho'(\alpha)$ and $\eta'(\alpha)$ in terms of the SVD components, and write the curvature in closed form.
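
A finite-difference check of the closed form is straightforward on a toy diagonal problem (a sketch, assuming NumPy; the sign of the curvature depends on the curve's orientation, so the corner is located via its magnitude):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 100
sigma = 1.0 / np.arange(1, K + 1)
d = sigma / np.arange(1, K + 1) + 1e-3 * rng.standard_normal(K)  # data coefficients

alphas = np.logspace(-8, 0, 400)
rho = np.log([np.linalg.norm(a * d / (sigma**2 + a)) for a in alphas])      # log residual norm
eta = np.log([np.linalg.norm(sigma * d / (sigma**2 + a)) for a in alphas])  # log solution norm

# kappa = (rho' eta'' - rho'' eta') / (rho'^2 + eta'^2)^(3/2), derivatives in t = log(alpha).
t = np.log(alphas)
r1, e1 = np.gradient(rho, t), np.gradient(eta, t)
r2, e2 = np.gradient(r1, t), np.gradient(e1, t)
kappa = (r1 * e2 - r2 * e1) / (r1**2 + e1**2) ** 1.5
print("corner at alpha ≈", alphas[int(np.argmax(np.abs(kappa)))])
```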

ex-ch02-10

Hard

Prove that Tikhonov regularisation has qualification $\mu_0 = 2$ by showing:

(a) For $\mu \leq 2$, the optimal rate $\|x_\alpha^\delta - x^\dagger\| = O(\delta^{2\mu/(2\mu+1)})$ is achieved with $\alpha^* \sim (\delta/E)^{2/(2\mu+1)}$.

(b) For $\mu > 2$, the rate cannot exceed $O(\delta^{4/5})$ regardless of the choice of $\alpha$.

ex-ch02-11

Hard

The SURE functional for Tikhonov regularisation is

$$\widehat{\mathrm{MSE}}(\alpha) = -m\sigma_n^2 + \|\mathbf{A}\mathbf{x}_\alpha^\delta - \mathbf{y}^\delta\|^2 + 2\sigma_n^2 \sum_k \frac{\sigma_k^2}{\sigma_k^2 + \alpha}.$$

(a) Show that $\mathbb{E}[\widehat{\mathrm{MSE}}(\alpha)] = \mathbb{E}\,\|\mathbf{A}\mathbf{x}_\alpha^\delta - \mathbf{A}x^\dagger\|^2$, i.e., SURE is an unbiased estimate of the prediction error.

(b) Find the $\alpha$ that minimises SURE in the simple case of a single component ($m = n = 1$) with $\sigma_1 = \sigma$ and $\langle y^\delta, u_1\rangle = d$.
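
Part (b) can be cross-checked by a grid search over $\alpha$ (a sketch, assuming NumPy; $\sigma$, $\sigma_n$, and $d$ are illustrative numbers, not from the exercise):

```python
import numpy as np

sigma, sigma_n, d = 2.0, 0.1, 1.3
alphas = np.logspace(-6, 4, 4000)

residual = (alphas * d / (sigma**2 + alphas)) ** 2        # (A x_alpha - y)^2 for m = n = 1
sure = -sigma_n**2 + residual + 2 * sigma_n**2 * sigma**2 / (sigma**2 + alphas)

print("SURE-optimal alpha ≈", alphas[int(np.argmin(sure))])  # compare with your answer to (b)
```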

ex-ch02-12

Hard

Consider the Born iterative method for 2D microwave tomography. The true permittivity contrast is $\chi^\dagger(\mathbf{r})$. The linearised forward map (Fréchet derivative) at the $n$-th iteration is

$$[\mathcal{F}'(x_n)\delta\chi](\mathbf{r}_{\text{rx}}) = k_0^2 \int_\Omega G_0(\mathbf{r}_{\text{rx}}, \mathbf{r})\, E_n(\mathbf{r})\,\delta\chi(\mathbf{r})\,d\mathbf{r}.$$

(a) Write the adjoint $(\mathcal{F}'(x_n))^* w$ for $w \in L^2(\mathbb{R}^2_{\text{rx}})$.

(b) Show that the IRGNM normal equation at step $n$ is equivalent to a Tikhonov regularised linear system with matrix entries $[(\mathcal{F}'(x_n))^*\mathcal{F}'(x_n)]_{ij}$ that can be computed as integrals involving $G_0$ and $E_n$.

(c) Explain why the adjoint computation is cheaper than the forward computation by a factor of $N_{\text{rx}}$ (the number of receivers).
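
A useful companion computation: after discretisation on $N_{\text{pix}}$ pixels, $\mathcal{F}'(x_n)$ becomes the $N_{\text{rx}} \times N_{\text{pix}}$ matrix with entries $J_{mj} = k_0^2\, G_0(\mathbf{r}_m, \mathbf{r}_j)\, E_n(\mathbf{r}_j)\,\Delta A$, and the discrete adjoint is its conjugate transpose. The sketch below (assuming NumPy/SciPy, the 2D free-space Green's function $G_0 = \tfrac{i}{4}H_0^{(1)}(k_0|\mathbf{r}-\mathbf{r}'|)$, and a made-up field $E_n$) runs the standard inner-product adjoint test:

```python
import numpy as np
from scipy.special import hankel1

rng = np.random.default_rng(3)
k0, dA = 2 * np.pi, 0.05**2                          # wavenumber and pixel area (illustrative)
pix = rng.uniform(0.0, 1.0, size=(400, 2))           # pixel centres in the unit square
rx = np.column_stack([np.full(16, 3.0), np.linspace(0.0, 1.0, 16)])  # receiver positions
E_n = rng.standard_normal(400) + 1j * rng.standard_normal(400)       # stand-in field E_n

dist = np.linalg.norm(rx[:, None, :] - pix[None, :, :], axis=-1)
G0 = 0.25j * hankel1(0, k0 * dist)                   # 2D free-space Green's function
J = k0**2 * G0 * E_n[None, :] * dA                   # discretised F'(x_n)

# Adjoint test: <J x, w> = <x, J^H w> for the complex Euclidean inner product.
x = rng.standard_normal(400) + 1j * rng.standard_normal(400)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
print("adjoint residual:", abs(np.vdot(J @ x, w) - np.vdot(x, J.conj().T @ w)))
```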

ex-ch02-13

Challenge

Consider the linear inverse problem $y^\delta = \mathcal{A}x + \eta$ with Gaussian noise $\eta \sim \mathcal{N}(0, \sigma_n^2 I)$ and Gaussian prior $x \sim \mathcal{N}(x_0, \mathcal{C}_{\text{prior}})$, where the prior covariance has eigenfunctions $\{v_k\}$ (aligned with the SVD of $\mathcal{A}$) and eigenvalues $\lambda_k^{\text{pr}}$.

(a) Derive the MAP estimate $x_{\text{MAP}}$ and show it is a generalised Tikhonov solution with component-dependent regularisation parameter $\alpha_k = \sigma_n^2/\lambda_k^{\text{pr}}$.

(b) The posterior covariance is $\mathcal{C}_{\text{post}} = (\sigma_n^{-2}\mathcal{A}^*\mathcal{A} + \mathcal{C}_{\text{prior}}^{-1})^{-1}$. Compute its eigenvalues and interpret: which components of $x$ are data-determined and which are prior-determined?

(c) For a flat prior ($\lambda_k^{\text{pr}} \to \infty$ for all $k$), show that $x_{\text{MAP}} \to \mathcal{A}^\dagger y^\delta$.
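
Because everything diagonalises in the shared basis, all three parts reduce to scalar formulas per mode; a sketch (assuming NumPy, with illustrative spectra and $x_0 = 0$):

```python
import numpy as np

k = np.arange(1, 21)
sigma = 1.0 / k                          # singular values of A
lam_pr = 1.0 / np.sqrt(k)                # prior eigenvalues (illustrative decay)
sigma_n = 0.05
d = sigma * 1.0                          # data coefficients for a ground truth with x_k = 1

# (a) MAP = generalised Tikhonov with component-wise alpha_k.
alpha_k = sigma_n**2 / lam_pr
x_map = sigma * d / (sigma**2 + alpha_k)

# (b) Posterior eigenvalues; a ratio << 1 means the data dominate the prior on that mode.
var_post = 1.0 / (sigma**2 / sigma_n**2 + 1.0 / lam_pr)
print("posterior/prior variance ratio:", np.round(var_post / lam_pr, 3))

# (c) Flat-prior limit: alpha_k -> 0 recovers the pseudoinverse solution d/sigma.
x_flat = sigma * d / (sigma**2 + 1e-14)
print("flat-prior limit vs pseudoinverse:", np.max(np.abs(x_flat - d / sigma)))
```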

ex-ch02-14

Challenge

Consider the 2D total variation denoising problem (ROF model):

$$\hat{x} = \arg\min_x \;\frac{1}{2}\|x - y\|^2 + \lambda\,\mathrm{TV}(x).$$

For the 1D case with a single step edge at position $k$ ($y_i = 0$ for $i < k$, $y_i = h$ for $i \geq k$):

(a) Show that the ROF solution has the form $\hat{x}_i = 0$ for $i < k$ and $\hat{x}_i = (h - 2\lambda)^+$ for $i \geq k$ (soft thresholding of the edge height).

(b) What is the minimum edge height $h_{\min}$ that survives TV regularisation?

(c) Explain why this edge-preservation property makes TV superior to Tikhonov for piecewise-constant scenes in radar imaging.
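
For experimenting with parts (a) and (b), the sketch below solves the 1D ROF problem by projected gradient on its dual (assuming NumPy; a generic solver, not necessarily the chapter's algorithm). One caveat: on a finite grid the shrinkage of each plateau depends on its length and on boundary handling, so the plateau levels may shift even when the jump height behaves as predicted; match the exercise's exact setup before comparing pointwise values.

```python
import numpy as np

def rof_1d(y, lam, n_iter=20_000, tau=0.25):
    """Minimise 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]| via the dual problem."""
    p = np.zeros(len(y) - 1)                          # one dual variable per finite difference
    for _ in range(n_iter):
        x = y + np.diff(np.r_[0.0, p, 0.0])           # primal update: x = y - D^T p
        p = np.clip(p + tau * np.diff(x), -lam, lam)  # projected gradient step onto |p_i| <= lam
    return y + np.diff(np.r_[0.0, p, 0.0])

# Step edge of height h = 1 with one sample per plateau: the jump shrinks like (h - 2*lam)^+.
h = 1.0
for lam in [0.1, 0.3, 0.6]:
    x = rof_1d(np.array([0.0, h]), lam)
    print(f"lam = {lam}: x = {np.round(x, 4)}, jump = {round(float(x[1] - x[0]), 4)}")
```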

ex-ch02-15

Challenge

The Inverse Crime. Consider a simulated microwave tomography experiment where you:

  1. Generate synthetic data $y = \mathcal{A}\chi^\dagger$ using a finite-element solver on a $100 \times 100$ grid.
  2. Add noise $\eta$ with noise level $\delta = 0.01$.
  3. Reconstruct $\hat{\chi}$ using IRGNM with the same grid and solver.

(a) Explain why this simulation setup constitutes an "inverse crime" and why the resulting reconstruction error will be misleadingly small.

(b) Propose two ways to avoid the inverse crime while keeping computation manageable.

(c) For the corrected simulation, describe how the choice of inner discretisation (forward model) vs. outer discretisation (reconstruction grid) affects the regularisation parameter $\alpha$ and the achievable reconstruction quality.