Exercises
ex01-verify-norm
Easy: Verify the triangle inequality $\|f + g\| \le \|f\| + \|g\|$ for the norm induced by an inner product, using the Cauchy–Schwarz inequality $|\langle f, g\rangle| \le \|f\|\,\|g\|$.
Expand $\|f + g\|^2 = \langle f + g, f + g\rangle$.
Use $\operatorname{Re}\langle f, g\rangle \le |\langle f, g\rangle| \le \|f\|\,\|g\|$ from Cauchy–Schwarz.
Recognize the right-hand side as $(\|f\| + \|g\|)^2$ and take square roots.
Expand the squared norm
$$\|f + g\|^2 = \langle f + g, f + g\rangle = \|f\|^2 + 2\operatorname{Re}\langle f, g\rangle + \|g\|^2.$$
Apply Cauchy–Schwarz
Since $\operatorname{Re}\langle f, g\rangle \le |\langle f, g\rangle|$, the Cauchy–Schwarz inequality gives $\operatorname{Re}\langle f, g\rangle \le \|f\|\,\|g\|$. Substituting: $$\|f + g\|^2 \le \|f\|^2 + 2\|f\|\,\|g\| + \|g\|^2 = (\|f\| + \|g\|)^2.$$
Take square roots
Both sides are non-negative, so taking square roots preserves the inequality: $\|f + g\| \le \|f\| + \|g\|$.
ex02-lp-norms
Easy: Let $f(x) = x$ on $[-1, 1]$. Compute the three norms $\|f\|_1$, $\|f\|_2$, and $\|f\|_\infty$.
Use the symmetry of $|f(x)| = |x|$ to write each integral as $2\int_0^1 (\cdot)\,dx$.
For $\|f\|_\infty$, recall this is $\operatorname{ess\,sup}_{x \in [-1,1]} |f(x)|$. Where does $|x|$ attain its maximum?
For $\|f\|_2$, compute $\left(\int_{-1}^1 x^2\,dx\right)^{1/2}$.
$L^1$ norm
Since $|f(x)| = |x|$ on $[-1, 1]$: $$\|f\|_1 = \int_{-1}^1 |x|\,dx = 2\int_0^1 x\,dx = 1.$$
$L^2$ norm
$$\|f\|_2^2 = \int_{-1}^1 x^2\,dx = 2\int_0^1 x^2\,dx = \frac{2}{3}, \qquad \text{so } \|f\|_2 = \sqrt{2/3}.$$
$L^\infty$ norm
The function $|f(x)| = |x|$ is continuous on $[-1, 1]$ with maximum at $x = \pm 1$: $$\|f\|_\infty = 1.$$
Verify the ordering
We have $\|f\|_1 = 1$, $\|f\|_2 = \sqrt{2/3} \approx 0.816$, and $\|f\|_\infty = 1$. On a bounded domain, $\|f\|_p$ is generally a decreasing function of $p$ up to normalization by the domain measure (here $|\Omega| = 2$), consistent with the general inclusion $L^q(\Omega) \subset L^p(\Omega)$ for $p \le q$.
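A quick numerical check of the three values (a minimal Python sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

# Sample f(x) = x on a fine uniform grid over [-1, 1].
N = 200_001
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
f = x

l1 = np.sum(np.abs(f)) * dx        # ≈ ∫|f| dx = 1
l2 = np.sqrt(np.sum(f**2) * dx)    # ≈ sqrt(2/3) ≈ 0.8165
linf = np.max(np.abs(f))           # = 1, attained at x = ±1

print(l1, l2, linf)
```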
ex03-adjoint-matrix
Easy: Show that the adjoint of a matrix $A \in \mathbb{C}^{m \times n}$, viewed as a bounded linear map from $\mathbb{C}^n$ to $\mathbb{C}^m$, is the conjugate transpose $A^H = \bar{A}^T$.
That is, verify that $\langle A x, y\rangle_{\mathbb{C}^m} = \langle x, A^H y\rangle_{\mathbb{C}^n}$ for all $x \in \mathbb{C}^n$ and $y \in \mathbb{C}^m$.
Write the inner product as $\langle u, v\rangle = \sum_i u_i\,\overline{v_i}$.
Use the property $(A^H)_{ji} = \overline{A_{ij}}$.
Expand the left-hand side
Using the standard inner product on $\mathbb{C}^m$: $$\langle A x, y\rangle = \sum_{i=1}^m (Ax)_i\,\overline{y_i} = \sum_{i=1}^m \sum_{j=1}^n A_{ij}\,x_j\,\overline{y_i}.$$
Rewrite using the conjugate transpose
Note that $\overline{(A^H y)_j} = \overline{\sum_{i=1}^m (A^H)_{ji}\,y_i} = \sum_{i=1}^m A_{ij}\,\overline{y_i}$ (using $(A^H)_{ji} = \overline{A_{ij}}$), so $$\langle x, A^H y\rangle = \sum_{j=1}^n x_j\,\overline{(A^H y)_j} = \sum_{i=1}^m \sum_{j=1}^n A_{ij}\,x_j\,\overline{y_i}.$$
Conclude
Since the identity $\langle A x, y\rangle = \langle x, A^H y\rangle$ holds for all $x \in \mathbb{C}^n$ and $y \in \mathbb{C}^m$, the adjoint of $A$ is $A^* = A^H$.
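A minimal numerical sanity check of the identity (random complex test data; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)

inner = lambda u, v: np.sum(u * np.conj(v))   # <u, v> = Σ u_i conj(v_i)

lhs = inner(A @ x, y)               # <Ax, y> in C^m
rhs = inner(x, A.conj().T @ y)      # <x, A^H y> in C^n
print(np.isclose(lhs, rhs))         # True
```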
ex04-projection
Easy: Find the orthogonal projection of $f(x) = x$ onto $V = \operatorname{span}\{1, \cos(2\pi x)\}$ in $L^2(0, 1)$.
The projection is $P_V f = \sum_k \langle f, e_k\rangle e_k$, where $\{e_k\}$ is an orthonormal basis for $V$.
Check: are $1$ and $\cos(2\pi x)$ already orthogonal in $L^2(0,1)$? Compute $\int_0^1 \cos(2\pi x)\,dx$.
Compute $\int_0^1 x\,dx$ and $\int_0^1 x\cos(2\pi x)\,dx$ (integrate by parts).
Verify orthogonality of the basis
Since $\int_0^1 \cos(2\pi x)\,dx = 0$, the functions $\{1, \cos(2\pi x)\}$ are orthogonal in $L^2(0,1)$. Their norms are $\|1\| = 1$ and $\|\cos(2\pi x)\| = 1/\sqrt{2}$, so an orthonormal basis for $V$ is $\{e_1, e_2\} = \{1,\ \sqrt{2}\cos(2\pi x)\}$.
Compute the Fourier coefficients
First, $\langle f, e_1\rangle = \int_0^1 x\,dx = \tfrac12$. For the second coefficient, integration by parts gives $$\int_0^1 x\cos(2\pi x)\,dx = \left[\frac{x\sin(2\pi x)}{2\pi}\right]_0^1 - \int_0^1 \frac{\sin(2\pi x)}{2\pi}\,dx = 0 + \frac{\cos(2\pi x)}{4\pi^2}\Big|_0^1 = 0,$$ so $\langle f, e_2\rangle = 0$.
Write the projection
The orthogonal projection of $f(x) = x$ onto $V = \operatorname{span}\{1, \cos(2\pi x)\}$ is $$P_V f = \langle f, e_1\rangle\,e_1 + \langle f, e_2\rangle\,e_2 = \tfrac12 \cdot 1 + 0 = \tfrac12,$$ the constant function $1/2$.
Verify orthogonality of the residual
The residual is $r(x) = f(x) - P_V f = x - \tfrac12$. We check: $\int_0^1 (x - \tfrac12)\,dx = 0$ and $\int_0^1 (x - \tfrac12)\cos(2\pi x)\,dx = 0$ (the second follows from the computation above). The residual is orthogonal to $V$, confirming our projection.
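A short numerical confirmation via Riemann sums (a sketch; the grid size is an arbitrary choice):

```python
import numpy as np

N = 100_000
x = (np.arange(N) + 0.5) / N               # midpoint grid on (0, 1)
dx = 1.0 / N
f = x
e1 = np.ones_like(x)                        # ||1|| = 1
e2 = np.sqrt(2.0) * np.cos(2 * np.pi * x)   # normalized cosine mode

c1 = np.sum(f * e1) * dx                    # <f, e1> ≈ 1/2
c2 = np.sum(f * e2) * dx                    # <f, e2> ≈ 0
proj = c1 * e1 + c2 * e2

print(c1, c2, np.max(np.abs(proj - 0.5)))   # ≈ 0.5, ≈ 0, small
```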
ex05-completeness
Medium: Prove that $\ell^2$ — the space of square-summable sequences $\mathbf{x} = (x_1, x_2, \dots)$ with $\|\mathbf{x}\|_2 = \left(\sum_{n=1}^\infty |x_n|^2\right)^{1/2} < \infty$ — is a complete normed space (i.e., a Banach space).
Let $(\mathbf{x}^{(k)})_{k \ge 1}$ be a Cauchy sequence in $\ell^2$. Show that for each fixed $n$, the scalar sequence $(x_n^{(k)})_{k \ge 1}$ is Cauchy in the scalar field.
Define the candidate limit $\mathbf{x}^* = (x_1^*, x_2^*, \dots)$ where $x_n^* = \lim_{k \to \infty} x_n^{(k)}$. You must show $\mathbf{x}^* \in \ell^2$ and $\mathbf{x}^{(k)} \to \mathbf{x}^*$ in the $\ell^2$ norm.
Use the Cauchy property: for $\varepsilon > 0$, there exists $K$ such that $\sum_{n=1}^N |x_n^{(k)} - x_n^{(j)}|^2 < \varepsilon^2$ for all $k, j \ge K$ and all finite $N$. Let $j \to \infty$ to get $\sum_{n=1}^N |x_n^{(k)} - x_n^*|^2 \le \varepsilon^2$.
Component-wise convergence
Let $(\mathbf{x}^{(k)})$ be Cauchy in $\ell^2$. For each fixed $n$: $$|x_n^{(k)} - x_n^{(j)}| \le \|\mathbf{x}^{(k)} - \mathbf{x}^{(j)}\|_2.$$ So $(x_n^{(k)})_k$ is Cauchy in the scalar field and converges to some $x_n^*$. Define $\mathbf{x}^* = (x_1^*, x_2^*, \dots)$.
Show $\mathbf{x}^{(k)} \to \mathbf{x}^*$ in $\ell^2$
Given $\varepsilon > 0$, choose $K$ such that $\|\mathbf{x}^{(k)} - \mathbf{x}^{(j)}\|_2 < \varepsilon$ for all $k, j \ge K$. For any finite $N$: $$\sum_{n=1}^N |x_n^{(k)} - x_n^{(j)}|^2 \le \|\mathbf{x}^{(k)} - \mathbf{x}^{(j)}\|_2^2 < \varepsilon^2.$$ Letting $j \to \infty$ (limits commute with finite sums): $$\sum_{n=1}^N |x_n^{(k)} - x_n^*|^2 \le \varepsilon^2.$$ Now let $N \to \infty$: $\|\mathbf{x}^{(k)} - \mathbf{x}^*\|_2 \le \varepsilon$ for all $k \ge K$, so $\mathbf{x}^{(k)} \to \mathbf{x}^*$ in $\ell^2$.
Show $\mathbf{x}^* \in \ell^2$
By the triangle inequality: $$\|\mathbf{x}^*\|_2 \le \|\mathbf{x}^* - \mathbf{x}^{(K)}\|_2 + \|\mathbf{x}^{(K)}\|_2 \le \varepsilon + \|\mathbf{x}^{(K)}\|_2 < \infty.$$ Therefore $\mathbf{x}^* \in \ell^2$, and $\mathbf{x}^{(k)} \to \mathbf{x}^*$ in the $\ell^2$ norm.
ex06-operator-norm
Medium: Compute the operator norm of the functional $\mathcal{A}: L^2(0,1) \to \mathbb{R}$ defined by $$\mathcal{A}f = \int_0^1 f(x)\,dx.$$
The operator norm is $\|\mathcal{A}\| = \sup_{\|f\|_2 = 1} |\mathcal{A}f|$. Use Cauchy–Schwarz to find an upper bound.
By Cauchy–Schwarz, $|\mathcal{A}f| = |\langle f, 1\rangle| \le \|f\|_2\,\|1\|_2$. What is $\|1\|_2$?
Show the bound is achieved by $f \equiv 1$.
Identify $\mathcal{A}$ as an inner product
We can write $\mathcal{A}f = \langle f, \varphi\rangle$ where $\varphi \equiv 1$ is the constant function.
Upper bound via Cauchy–Schwarz
By Cauchy–Schwarz: $$|\mathcal{A}f| = |\langle f, \varphi\rangle| \le \|f\|_2\,\|\varphi\|_2 = \|f\|_2.$$ Therefore $\|\mathcal{A}\| \le 1$.
Achieve the bound
Take $f = \varphi \equiv 1$, which has $\|f\|_2 = 1$. Then $$|\mathcal{A}f| = \int_0^1 1\,dx = 1 = \|f\|_2.$$ The supremum is achieved, so $\|\mathcal{A}\| = 1$.
General principle
More generally, for the Riesz functional $\mathcal{A}f = \langle f, \varphi\rangle$, the Riesz representation theorem guarantees $\|\mathcal{A}\| = \|\varphi\|$, achieved by $f = \varphi/\|\varphi\|$. Our case is $\varphi \equiv 1$ with $\|\varphi\|_2 = 1$.
ex07-compact-diagonal
Medium: Let $\{e_n\}_{n \ge 1}$ be an orthonormal basis of a separable Hilbert space $\mathcal{H}$, and define the diagonal operator $\mathcal{A}: \mathcal{H} \to \mathcal{H}$ by $$\mathcal{A}f = \sum_{n=1}^\infty d_n \langle f, e_n\rangle e_n,$$ where $(d_n)$ is a bounded sequence.
Show that $\mathcal{A}$ is compact if and only if $d_n \to 0$ as $n \to \infty$.
For the forward direction ($\Rightarrow$): if $\mathcal{A}$ is compact, consider the bounded sequence $(e_n)$ and use the definition of compactness.
For the reverse direction ($\Leftarrow$): define finite-rank operators $\mathcal{A}_N$ by $\mathcal{A}_N e_n = d_n e_n$ for $n \le N$ and $\mathcal{A}_N e_n = 0$ for $n > N$. Show $\|\mathcal{A} - \mathcal{A}_N\| \to 0$.
Recall that the limit of finite-rank operators in operator norm is compact.
Forward direction: compact $\Rightarrow$ $d_n \to 0$
The sequence $(e_n)$ is bounded ($\|e_n\| = 1$) but has no convergent subsequence (since $\|e_n - e_m\| = \sqrt{2}$ for $n \ne m$).
If $\mathcal{A}$ is compact, $(\mathcal{A}e_n) = (d_n e_n)$ must have a convergent subsequence. But $\|d_n e_n - d_m e_m\|^2 = |d_n|^2 + |d_m|^2$ for $n \ne m$. For a subsequence to converge, we need $d_n \to 0$ along that subsequence (the only possible limit is $0$).
If $d_n \not\to 0$, there exist $\delta > 0$ and a subsequence $(d_{n_k})$ with $|d_{n_k}| \ge \delta$. Then $\|d_{n_k} e_{n_k} - d_{n_j} e_{n_j}\| \ge \delta$ for $k \ne j$, so no sub-subsequence of $(\mathcal{A}e_{n_k})$ can converge, contradicting compactness.
Reverse direction: $d_n \to 0$ $\Rightarrow$ compact
Define $\mathcal{A}_N$ by $\mathcal{A}_N e_n = d_n e_n$ for $n \le N$ and $\mathcal{A}_N e_n = 0$ for $n > N$. Each $\mathcal{A}_N$ has finite rank (at most $N$), hence is compact.
Convergence in operator norm
For any $f$ with $\|f\| = 1$: $$\|(\mathcal{A} - \mathcal{A}_N)f\|^2 = \sum_{n > N} |d_n|^2\,|\langle f, e_n\rangle|^2 \le \Big(\sup_{n > N} |d_n|\Big)^2.$$ Therefore $\|\mathcal{A} - \mathcal{A}_N\| \le \sup_{n > N} |d_n| \to 0$ since $d_n \to 0$.
Conclude
Since $\mathcal{A}$ is the operator-norm limit of finite-rank operators, and the compact operators form a closed subspace of the bounded operators, $\mathcal{A}$ is compact.
ex08-eigenvalues-convolution
Medium: Consider the periodic convolution operator on $L^2(0,1)$: $$(\mathcal{A}f)(x) = \int_0^1 K(x - y)\,f(y)\,dy,$$ where $K$ is a $1$-periodic kernel. Show that the complex exponentials $e_n(x) = e^{2\pi i n x}$ are eigenfunctions of $\mathcal{A}$, and find the corresponding eigenvalues in terms of the Fourier coefficients of the kernel.
Substitute $f = e_n$ and use the change of variables $t = x - y$ to convert the integral to a standard Fourier coefficient.
The eigenvalue is the $n$-th Fourier coefficient of $K$: $\lambda_n = \hat{K}(n) = \int_0^1 K(t)\,e^{-2\pi i n t}\,dt$.
This is the infinite-dimensional analog of the fact that the DFT diagonalizes circulant matrices.
Apply $\mathcal{A}$ to $e_n$
With the change of variables $t = x - y$ (the integration range remains one full period, by periodicity of $K$): $$(\mathcal{A}e_n)(x) = \int_0^1 K(x - y)\,e^{2\pi i n y}\,dy = \int_0^1 K(t)\,e^{2\pi i n (x - t)}\,dt = e^{2\pi i n x} \int_0^1 K(t)\,e^{-2\pi i n t}\,dt.$$
Identify the eigenvalue
The remaining factor $\hat{K}(n) = \int_0^1 K(t)\,e^{-2\pi i n t}\,dt$ is the $n$-th Fourier coefficient of $K$. Thus $\mathcal{A}e_n = \hat{K}(n)\,e_n$: each $e_n$ is an eigenfunction with eigenvalue $\lambda_n = \hat{K}(n)$.
Spectral characterization of $\mathcal{A}$
Since $\{e_n\}_{n \in \mathbb{Z}}$ forms an orthonormal basis for $L^2(0,1)$, the operator is diagonalized by the Fourier basis: $$\mathcal{A}f = \sum_{n \in \mathbb{Z}} \hat{K}(n)\,\langle f, e_n\rangle\,e_n.$$ The operator is compact if and only if $\hat{K}(n) \to 0$ as $|n| \to \infty$ (by Exercise 7).
Finite-dimensional analog
This is the continuous analog of the discrete result: circulant matrices are diagonalized by the DFT matrix. If $C$ is an $N \times N$ circulant matrix with first column $\mathbf{c}$, then $C = F^{-1}\operatorname{diag}(F\mathbf{c})\,F$, where $F$ is the DFT matrix.
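A small numerical check of the discrete statement (a sketch; the kernel samples are an arbitrary choice, and SciPy is assumed available for `circulant`):

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([4.0, 1.0, 0.5, 0.25, 0.5, 1.0])   # illustrative kernel samples
C = circulant(c)                                 # C[i, j] = c[(i - j) mod n]
n = len(c)
lam = np.fft.fft(c)                              # candidate eigenvalues: DFT of first column

idx = np.arange(n)
for k in range(n):
    w_k = np.exp(2j * np.pi * idx * k / n)       # discrete analog of e^{2πi n x}
    assert np.allclose(C @ w_k, lam[k] * w_k)    # w_k is an eigenvector with eigenvalue λ_k
print("circulant matrix diagonalized by the DFT basis")
```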
ex09-picard
Medium: An operator $\mathcal{A}$ has singular values $\sigma_n = n^{-1}$ and singular functions $\{v_n, u_n\}$. Given data $g$ with $\langle g, u_n\rangle = n^{-2}$ for all $n$:
- Verify that the Picard condition is satisfied.
- Compute $\|g^\dagger\|^2$, where $g^\dagger = \mathcal{A}^\dagger g$ is the generalized inverse solution.
- What happens if the data coefficients decay as $n^{-1}$ instead?
The Picard condition requires $\sum_n \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2} < \infty$.
Compute the ratios: $\frac{\langle g, u_n\rangle}{\sigma_n} = \frac{n^{-2}}{n^{-1}} = n^{-1}$.
Recall that $\sum_n n^{-p}$ converges if and only if $p > 1$.
Check the Picard condition
The Picard condition requires: $$\sum_n \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2} < \infty.$$ Substituting $\langle g, u_n\rangle = n^{-2}$ and $\sigma_n = n^{-1}$: $$\sum_n \frac{n^{-4}}{n^{-2}} = \sum_n n^{-2} < \infty.$$ The Picard condition is satisfied.
Compute $\|g^\dagger\|^2$
Since the $v_n$ are orthonormal: $$\|g^\dagger\|^2 = \sum_n \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2} = \sum_n n^{-2} = \frac{\pi^2}{6}.$$
The borderline case
If $\langle g, u_n\rangle = n^{-1}$ instead, the Picard sum becomes: $$\sum_n \frac{n^{-2}}{n^{-2}} = \sum_n 1 = \infty.$$ The Picard condition fails — the data coefficients decay at the same rate as the singular values, so the generalized inverse solution has infinite norm. This is precisely the ill-posedness that regularization must address.
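A short numerical illustration of the two cases, assuming the values $\sigma_n = n^{-1}$, $\langle g, u_n\rangle = n^{-2}$ (convergent) and $n^{-1}$ (borderline) used above:

```python
import numpy as np

n = np.arange(1, 100_001, dtype=float)
sigma = 1.0 / n

picard_ok = np.cumsum((n**-2 / sigma) ** 2)    # Σ (n^{-2}/n^{-1})² = Σ n^{-2}
picard_bad = np.cumsum((n**-1 / sigma) ** 2)   # Σ 1 = N, diverges

print(picard_ok[-1])                    # ≈ π²/6 ≈ 1.6449
print(picard_bad[[99, 9_999, 99_999]])  # 100, 10000, 100000: grows linearly
```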
ex10-lp-inclusions
Hard: Let $\Omega$ be a bounded domain with finite measure $|\Omega| < \infty$. Prove that $L^q(\Omega) \subset L^p(\Omega)$ for $1 \le p \le q \le \infty$, with the embedding constant $$\|f\|_p \le |\Omega|^{\frac{1}{p} - \frac{1}{q}}\,\|f\|_q.$$
Apply Hölder's inequality with exponents $r = q/p$ and $r' = \frac{q}{q - p}$ to $\int_\Omega |f|^p \cdot 1\,dx$.
For the case $q = \infty$, the bound simplifies to $\|f\|_p \le |\Omega|^{1/p}\,\|f\|_\infty$, which is immediate.
The embedding constant is sharp: equality holds for $f$ constant.
Set up Hölder's inequality
For $p < q < \infty$, let $r = q/p > 1$ and $r' = \frac{r}{r-1} = \frac{q}{q - p}$. Apply Hölder's inequality to $\int_\Omega |f|^p \cdot 1\,dx$: $$\int_\Omega |f|^p\,dx \le \left(\int_\Omega |f|^{pr}\,dx\right)^{1/r}\left(\int_\Omega 1^{r'}\,dx\right)^{1/r'} = \left(\int_\Omega |f|^q\,dx\right)^{p/q}\,|\Omega|^{1 - p/q}.$$
Simplify
Since $pr = q$: $$\|f\|_p^p \le \|f\|_q^p\,|\Omega|^{1 - p/q}.$$ Taking the $p$-th root: $$\|f\|_p \le |\Omega|^{\frac{1}{p} - \frac{1}{q}}\,\|f\|_q.$$
Case $q = \infty$
For $q = \infty$: $\|f\|_p^p = \int_\Omega |f|^p\,dx \le |\Omega|\,\|f\|_\infty^p$, so $\|f\|_p \le |\Omega|^{1/p}\,\|f\|_\infty$, which matches the general formula with $1/q = 0$.
Sharpness
For $f \equiv c$ (constant), $\|f\|_p = |c|\,|\Omega|^{1/p}$ and $\|f\|_q = |c|\,|\Omega|^{1/q}$, so $\|f\|_p = |\Omega|^{1/p - 1/q}\,\|f\|_q$, achieving equality. The embedding constant is sharp.
ex11-svd-rank-one
Hard: Let $u$ and $v$ be unit vectors in Hilbert spaces $\mathcal{G}$ and $\mathcal{F}$, respectively, and define the rank-one operator $\mathcal{A}: \mathcal{F} \to \mathcal{G}$ by $$\mathcal{A}f = \langle f, v\rangle\,u.$$
- Find the adjoint $\mathcal{A}^*$.
- Compute $\mathcal{A}^*\mathcal{A}$ and $\mathcal{A}\mathcal{A}^*$.
- Derive the SVD of $\mathcal{A}$: find its singular value(s), left singular vector(s), and right singular vector(s).
- Generalize: what is the SVD of $\mathcal{A}f = c\,\langle f, v\rangle\,u$ for $c > 0$?
For the adjoint, find $\mathcal{A}^*g$ such that $\langle \mathcal{A}f, g\rangle = \langle f, \mathcal{A}^*g\rangle$ for all $f$ and $g$.
Substitute: $\langle \mathcal{A}f, g\rangle = \langle f, v\rangle\,\langle u, g\rangle$.
The rank-one operator has exactly one non-zero singular value.
Find the adjoint
For all $f \in \mathcal{F}$ and $g \in \mathcal{G}$: $$\langle \mathcal{A}f, g\rangle = \langle f, v\rangle\,\langle u, g\rangle = \langle f,\ \langle g, u\rangle\,v\rangle.$$ Therefore $\mathcal{A}^*g = \langle g, u\rangle\,v$.
Compute $\mathcal{A}^*\mathcal{A}$ and $\mathcal{A}\mathcal{A}^*$
$\mathcal{A}^*\mathcal{A}f = \langle f, v\rangle\,\mathcal{A}^*u = \langle f, v\rangle\,v$ (using $\|u\| = 1$), so $\mathcal{A}^*\mathcal{A}$ is the orthogonal projection onto $\operatorname{span}\{v\}$. Similarly, $\mathcal{A}\mathcal{A}^* g = \langle g, u\rangle\,u$ (using $\|v\| = 1$), the orthogonal projection onto $\operatorname{span}\{u\}$.
Read off the SVD
Since $\mathcal{A}^*\mathcal{A}v = v$, the eigenvalue is $\sigma_1^2 = 1$, so $\sigma_1 = 1$.
- Right singular vector: $v_1 = v$
- Left singular vector: $u_1 = u$
- SVD: $\mathcal{A}f = \sigma_1\,\langle f, v_1\rangle\,u_1 = \langle f, v\rangle\,u$,
where $\sigma_1 = 1$ is the only non-zero singular value.
Generalization
For $\mathcal{A}f = c\,\langle f, v\rangle\,u$ with $c > 0$, the same analysis gives $\mathcal{A}^*\mathcal{A}f = c^2\,\langle f, v\rangle\,v$, so the singular value is $\sigma_1 = c$, with the same singular vectors. The SVD is $\mathcal{A}f = c\,\langle f, v\rangle\,u$.
This is the operator-theoretic analog of the matrix SVD of a rank-one matrix $A = \sigma\,u v^H$.
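A finite-dimensional check with NumPy (real vectors for simplicity; the dimensions and the value of $c$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(6); u /= np.linalg.norm(u)   # unit vector in R^6
v = rng.standard_normal(4); v /= np.linalg.norm(v)   # unit vector in R^4
c = 2.5

A = c * np.outer(u, v)                 # rank-one map f ↦ c <f, v> u
U, s, Vh = np.linalg.svd(A)

print(s)                                        # [2.5, ~0, ~0, ~0]
print(np.isclose(abs(U[:, 0] @ u), 1.0),        # first left singular vector ∥ u
      np.isclose(abs(Vh[0] @ v), 1.0))          # first right singular vector ∥ v
```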
ex12-sobolev-embedding
Hard: Prove the 1D Sobolev embedding theorem: $H^1(0,1) \hookrightarrow C[0,1]$, with the bound $$\|f\|_\infty \le \sqrt{2}\,\|f\|_{H^1}, \qquad \text{where } \|f\|_{H^1}^2 = \|f\|_{L^2}^2 + \|f'\|_{L^2}^2.$$
Here $H^1(0,1)$ is the Sobolev space of functions in $L^2(0,1)$ whose weak derivatives are also in $L^2(0,1)$.
Use the fundamental theorem of calculus: $f(x) = f(x_0) + \int_{x_0}^x f'(t)\,dt$ for a suitable point $x_0$.
Choose $x_0$ as the mean value point: $f(x_0) = \int_0^1 f(t)\,dt$, so that $|f(x_0)| \le \|f\|_{L^2}$ by Cauchy–Schwarz.
Apply Cauchy–Schwarz to the integral: $\left|\int_{x_0}^x f'(t)\,dt\right| \le \int_0^1 |f'(t)|\,dt \le \|f'\|_{L^2}$.
Representation via the fundamental theorem
For smooth $f$ (the general case follows by density), the mean value theorem for integrals gives a point $x_0 \in (0,1)$ such that $f(x_0) = \int_0^1 f(t)\,dt$. For any $x \in [0,1]$: $$f(x) = f(x_0) + \int_{x_0}^x f'(t)\,dt.$$
Bound each term
First term: $|f(x_0)| = \left|\int_0^1 f(t)\,dt\right|$. By Cauchy–Schwarz on $[0,1]$: $$|f(x_0)| \le \left(\int_0^1 1\,dt\right)^{1/2}\left(\int_0^1 |f(t)|^2\,dt\right)^{1/2} = \|f\|_{L^2}.$$
Second term: $\left|\int_{x_0}^x f'(t)\,dt\right| \le \int_0^1 |f'(t)|\,dt$. By Cauchy–Schwarz: $$\int_0^1 |f'(t)|\,dt \le \left(\int_0^1 |f'(t)|^2\,dt\right)^{1/2} = \|f'\|_{L^2}.$$
Combine
Combining the two bounds, $|f(x)| \le \|f\|_{L^2} + \|f'\|_{L^2}$ for every $x$. Using $(a + b)^2 \le 2(a^2 + b^2)$: $$|f(x)|^2 \le 2\left(\|f\|_{L^2}^2 + \|f'\|_{L^2}^2\right) = 2\,\|f\|_{H^1}^2.$$ Taking the supremum over $x$ gives $\|f\|_\infty \le \sqrt{2}\,\|f\|_{H^1}$.
Continuity
The representation $f(x) = f(x_0) + \int_{x_0}^x f'(t)\,dt$ shows $f$ is absolutely continuous (hence continuous), since $f' \in L^2(0,1) \subset L^1(0,1)$ on bounded domains. Therefore every $H^1(0,1)$ function has a continuous representative, establishing the embedding $H^1(0,1) \hookrightarrow C[0,1]$.
Remark: This embedding is specific to 1D. In $d$ dimensions, $H^s \hookrightarrow C$ requires $s > d/2$, i.e., $H^1$ suffices for $d = 1$ only.
ex13-hilbert-schmidt-norm
Hard: Let $\mathcal{A}$ be an integral operator with square-integrable kernel $K(x, y)$: $$(\mathcal{A}f)(x) = \int K(x, y)\,f(y)\,dy.$$ Show that:
- The Hilbert–Schmidt norm satisfies $\|\mathcal{A}\|_{HS}^2 = \iint |K(x, y)|^2\,dy\,dx$.
- If $\sigma_n$ are the singular values of $\mathcal{A}$, then $\|\mathcal{A}\|_{HS}^2 = \sum_n \sigma_n^2$.
The Hilbert–Schmidt norm is defined as $\|\mathcal{A}\|_{HS}^2 = \sum_n \|\mathcal{A}e_n\|^2$ for any orthonormal basis $\{e_n\}$.
For part 1, expand $\|\mathcal{A}e_n\|^2$ and use Parseval's theorem.
For part 2, use the SVD: choose the basis of right singular functions and compute $\sum_n \|\mathcal{A}v_n\|^2$.
Expand the Hilbert–Schmidt norm
Let $\{e_n\}$ be any orthonormal basis of the domain space. By definition: $$\|\mathcal{A}\|_{HS}^2 = \sum_n \|\mathcal{A}e_n\|^2 = \sum_n \int \left|\int K(x, y)\,e_n(y)\,dy\right|^2 dx.$$
Apply Parseval's theorem
For fixed $x$, the function $y \mapsto \overline{K(x, y)}$ lies in $L^2$. Its Fourier coefficients in the basis $\{e_n\}$ are $\langle \overline{K(x, \cdot)}, e_n\rangle = \overline{(\mathcal{A}e_n)(x)}$. By Parseval: $$\sum_n |(\mathcal{A}e_n)(x)|^2 = \|\overline{K(x, \cdot)}\|^2 = \int |K(x, y)|^2\,dy.$$
Note: $\|\overline{K(x, \cdot)}\| = \|K(x, \cdot)\|$ since we can absorb the conjugate into the kernel. Integrating over $x$: $$\|\mathcal{A}\|_{HS}^2 = \iint |K(x, y)|^2\,dy\,dx.$$
Relate to singular values
Now choose the orthonormal basis to be the right singular functions $\{v_n\}$ (extended to a full ONB of the domain space). Then $\|\mathcal{A}v_n\| = \sigma_n$ for the singular functions, and $\mathcal{A}e = 0$ for basis elements orthogonal to all $v_n$. Therefore: $$\|\mathcal{A}\|_{HS}^2 = \sum_n \sigma_n^2.$$
Physical interpretation
The identity $\sum_n \sigma_n^2 = \iint |K(x, y)|^2\,dy\,dx$ connects the abstract spectral decomposition to the physical kernel. For imaging, if $K$ is the Green's function, $\iint |K|^2$ measures the total energy coupling between the scene and measurement domains. The finiteness of this quantity ($K \in L^2$) is precisely what makes $\mathcal{A}$ Hilbert–Schmidt, hence compact.
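A discrete illustration of the identity (a sketch; the kernel $e^{-|x-y|}$ and the grid size are illustrative choices): the squared singular values of the quadrature matrix sum to its squared Frobenius norm, which is a Riemann sum for $\iint |K|^2$.

```python
import numpy as np

N = 400
x = (np.arange(N) + 0.5) / N                 # midpoint grid on (0, 1)
dx = 1.0 / N
X, Y = np.meshgrid(x, x, indexing="ij")
K = np.exp(-np.abs(X - Y))                   # illustrative smooth kernel

A = K * dx                                   # quadrature approximation of (Af)(x) = ∫K(x,y)f(y)dy
s = np.linalg.svd(A, compute_uv=False)

print(np.sum(s**2))           # Σ σ_n²
print(np.sum(K**2) * dx * dx)  # Riemann sum for ∬|K|²; exact value (1 + e^{-2})/2 ≈ 0.5677
```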
ex14-ill-posedness-proof
Challenge: Let $\mathcal{A}$ be a compact operator between Hilbert spaces with infinite-dimensional range. Prove that $\mathcal{A}^{-1}$ (defined on the range of $\mathcal{A}$) is unbounded.
This establishes that every compact operator with infinite-dimensional range defines an ill-posed inverse problem.
Use the SVD: $\mathcal{A}f = \sum_n \sigma_n \langle f, v_n\rangle u_n$. Since the range is infinite-dimensional, there are infinitely many non-zero singular values.
Compactness implies $\sigma_n \to 0$. Construct a sequence in the range whose pre-images have unbounded norm.
Consider $g_n = u_n$. Then $\|g_n\| = 1$, and $\|\mathcal{A}^{-1}g_n\| = \|\sigma_n^{-1}v_n\| = \sigma_n^{-1} \to \infty$.
SVD and singular value decay
By the SVD for compact operators: $$\mathcal{A}f = \sum_n \sigma_n \langle f, v_n\rangle u_n,$$ where $\sigma_n > 0$ (infinitely many of them, since the range is infinite-dimensional) and $\sigma_n \to 0$ (necessary for compactness).
Construct the unbounded sequence
Consider the sequence $g_n = u_n$ in the range of $\mathcal{A}$. Each $g_n$ is a unit vector ($\|g_n\| = 1$), and $\mathcal{A}^{-1}g_n = \sigma_n^{-1}v_n$ since $\mathcal{A}(\sigma_n^{-1}v_n) = u_n$.
Show unboundedness
Since $\sigma_n \to 0$, the data sequence $\{u_n\}$ is bounded ($\|u_n\| = 1$) while the pre-images $\{\sigma_n^{-1}v_n\}$ satisfy $\|\mathcal{A}^{-1}u_n\| = \sigma_n^{-1} \to \infty$. Hence $\mathcal{A}^{-1}$ is unbounded on the range of $\mathcal{A}$.
Connection to Hadamard
This proves that the inverse problem $\mathcal{A}f = g$ is ill-posed in Hadamard's sense — specifically, the solution does not depend continuously on the data. Since $\sigma_n \to 0$:
- Small perturbations $\varepsilon\,u_n$ in the data direction $u_n$ produce perturbations $\varepsilon\,\sigma_n^{-1}v_n$ in the solution, with amplification factor $\sigma_n^{-1} \to \infty$.
- The noise amplification at mode $n$ is $1/\sigma_n$, which is precisely what the Picard condition and regularization theory address.
This is the fundamental result motivating all of Chapter 2.
ex15-imaging-discretization
Challenge: Consider a 1D imaging problem: a scene $\rho(x)$ on $[0, 1]$ is observed by $M$ receivers at positions $r_m$ via the forward model $$g(r_m) = \int_0^1 K(r_m, x)\,\rho(x)\,dx, \qquad K(r, x) = \exp\!\left(-\frac{(r - x)^2}{2w^2}\right),$$ where $K$ is a Gaussian kernel with width $w$.
- Discretize the scene on $N$ uniform grid points and construct the sensing matrix $A \in \mathbb{R}^{M \times N}$ with entries $A_{mn} = K(r_m, x_n)\,\Delta x$.
- Compute the SVD of $A$ and analyze how the singular values decay as $N$ increases (with $M$ fixed).
- Show that the condition number $\kappa(A) = \sigma_{\max}/\sigma_{\min}$ grows without bound as $N \to \infty$, and relate this to the compactness of the continuous operator.
The matrix entries approximate the integral: $g(r_m) \approx \sum_{n=1}^N K(r_m, x_n)\,\rho(x_n)\,\Delta x$ via a Riemann sum with spacing $\Delta x = 1/N$.
For the Gaussian kernel, the singular values of the continuous operator decay super-exponentially. The discrete matrix inherits this decay up to the discretization rank.
As $N \to \infty$ with $M$ fixed, the number of significant singular values is bounded by $M$, but numerical singular values below machine precision appear, driving $\kappa(A) \to \infty$.
Construct the sensing matrix
Place the receivers uniformly, e.g. $r_m = (m - \tfrac12)/M$ for $m = 1, \dots, M$, and take the scene grid $x_n = (n - \tfrac12)\,\Delta x$ with $\Delta x = 1/N$, $n = 1, \dots, N$. The sensing matrix has entries: $$A_{mn} = K(r_m, x_n)\,\Delta x = \exp\!\left(-\frac{(r_m - x_n)^2}{2w^2}\right)\Delta x.$$
This is a Riemann-sum approximation: $g(r_m) = \int_0^1 K(r_m, x)\,\rho(x)\,dx \approx \sum_{n=1}^N A_{mn}\,\rho(x_n)$.
SVD analysis
The matrix $A$ has at most $\min(M, N)$ non-zero singular values. For the Gaussian kernel:
- The continuous operator has singular values decaying roughly like $e^{-cn^2}$ for some $c > 0$ (super-exponential decay related to the analyticity of $K$).
- The matrix singular values approximate the continuous ones for the leading modes, but additional spurious small singular values appear for the extra degrees of freedom when $N$ exceeds the effective rank of the kernel.
Condition number growth
The largest singular value $\sigma_1(A)$ stabilizes as $N$ increases (it converges to the norm of the continuous operator). However, the smallest singular value continues to decrease:
- For $N \le M$: $\sigma_{\min}(A) = \sigma_N(A)$ approximates the $N$-th singular value of the continuous operator, which decays super-exponentially in $N$.
- For $N > M$: $A$ has rank at most $M$, so singular values beyond the $M$-th vanish (exactly, in exact arithmetic). In practice with finite precision, the trailing computed singular values are $O(\epsilon_{\mathrm{mach}}\,\sigma_1)$.
Either way, $\kappa(A) = \sigma_{\max}(A)/\sigma_{\min}(A) \to \infty$ as $N$ grows.
Connection to continuous ill-posedness
The growing condition number is the discrete manifestation of the continuous operator's compactness:
- Compact operator $\Rightarrow$ $\sigma_n \to 0$ $\Rightarrow$ the inverse is unbounded (Exercise 14).
- Discrete approximation: as $N$ increases, the matrix better approximates the operator, inheriting more of the decaying singular values, which drives $\sigma_{\min}(A) \to 0$ and $\kappa(A) \to \infty$.
This demonstrates why simply increasing the discretization resolution does not improve reconstruction — it makes the linear system more ill-conditioned. Regularization (Chapter 2) is essential regardless of discretization level.
Numerical illustration
For fixed $M$ and kernel width $w$, the condition number grows rapidly with $N$: for small $N$ the matrix is only moderately ill-conditioned, for larger $N$ the smallest singular values approach machine precision, and beyond that $A$ is numerically rank-deficient.
The singular value curve of $A$ converges to that of the continuous operator as $N \to \infty$, confirming that discretization preserves the essential ill-posedness.
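A minimal Python sketch of the experiment (the uniform receiver layout and the parameter values $M = 50$, $w = 0.1$ are illustrative assumptions, not prescribed by the exercise):

```python
import numpy as np

def sensing_matrix(M, N, w):
    r = (np.arange(M) + 0.5) / M            # receiver positions (assumed uniform on [0, 1])
    x = (np.arange(N) + 0.5) / N            # scene grid
    return np.exp(-(r[:, None] - x[None, :])**2 / (2 * w**2)) / N   # K(r_m, x_n) Δx

M, w = 50, 0.1
for N in (10, 20, 40, 80, 160):
    s = np.linalg.svd(sensing_matrix(M, N, w), compute_uv=False)
    print(f"N = {N:4d}  sigma_max = {s[0]:.3e}  sigma_min = {s[-1]:.3e}  "
          f"cond = {s[0] / s[-1]:.3e}")
```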
ex16-singular-system-rank2
Hard: Let $\{v_1, v_2\}$ and $\{u_1, u_2\}$ be orthonormal sets, and define the rank-two operator $\mathcal{A}$ by $$\mathcal{A}f = 3\,\langle f, v_1\rangle\,u_1 + \langle f, v_2\rangle\,u_2.$$
- Find the adjoint $\mathcal{A}^*$.
- Compute $\mathcal{A}^*\mathcal{A}$ explicitly and find its eigenfunctions and eigenvalues.
- Write down the complete singular system $\{(\sigma_n; v_n, u_n)\}$.
- Verify the reconstruction formula $f = \sum_{n=1}^2 \frac{\langle \mathcal{A}f, u_n\rangle}{\sigma_n}\,v_n$ for each $f \in \operatorname{span}\{v_1, v_2\}$.
- Express the pseudoinverse $\mathcal{A}^\dagger$ using the singular system.
The adjoint of $\mathcal{A}$ is $\mathcal{A}^*g = 3\,\langle g, u_1\rangle\,v_1 + \langle g, u_2\rangle\,v_2$ (use the sesquilinear definition of the inner product).
Linearity: $\mathcal{A}^*\mathcal{A}f = 3\,\langle f, v_1\rangle\,\mathcal{A}^*u_1 + \langle f, v_2\rangle\,\mathcal{A}^*u_2 = 9\,\langle f, v_1\rangle\,v_1 + \langle f, v_2\rangle\,v_2$.
The eigenvalues of $\mathcal{A}^*\mathcal{A}$ are $9$ and $1$, so the singular values are $\sigma_1 = 3$ and $\sigma_2 = 1$.
The pseudoinverse is $\mathcal{A}^\dagger g = \frac{\langle g, u_1\rangle}{3}\,v_1 + \langle g, u_2\rangle\,v_2$.
Find the adjoint
For all $f$ and $g$: $$\langle \mathcal{A}f, g\rangle = 3\,\langle f, v_1\rangle\,\langle u_1, g\rangle + \langle f, v_2\rangle\,\langle u_2, g\rangle = \langle f,\ 3\,\langle g, u_1\rangle\,v_1 + \langle g, u_2\rangle\,v_2\rangle.$$ Therefore $\mathcal{A}^*g = 3\,\langle g, u_1\rangle\,v_1 + \langle g, u_2\rangle\,v_2$.
Compute $\mathcal{A}^*\mathcal{A}$
Applying $\mathcal{A}^*$ to $\mathcal{A}f$ and using orthonormality of $\{u_1, u_2\}$: $$\mathcal{A}^*\mathcal{A}f = 9\,\langle f, v_1\rangle\,v_1 + \langle f, v_2\rangle\,v_2.$$ So $\mathcal{A}^*\mathcal{A}$ has eigenvalues $9$ and $1$ with eigenfunctions $v_1$ and $v_2$ (and eigenvalue $0$ on $\{v_1, v_2\}^\perp$).
The complete singular system
The singular values are $\sigma_1 = 3$ and $\sigma_2 = 1$. The singular system is: $$\{(\sigma_1; v_1, u_1),\ (\sigma_2; v_2, u_2)\} = \{(3; v_1, u_1),\ (1; v_2, u_2)\}.$$ Right singular functions are the "scene modes"; left singular functions are the data modes.
Verify the reconstruction formula
For $f = \alpha_1 v_1 + \alpha_2 v_2$ we have $\mathcal{A}f = 3\alpha_1 u_1 + \alpha_2 u_2$, so $\langle \mathcal{A}f, u_1\rangle = 3\alpha_1$ and $\langle \mathcal{A}f, u_2\rangle = \alpha_2$. Then $$\sum_{n=1}^2 \frac{\langle \mathcal{A}f, u_n\rangle}{\sigma_n}\,v_n = \frac{3\alpha_1}{3}\,v_1 + \frac{\alpha_2}{1}\,v_2 = \alpha_1 v_1 + \alpha_2 v_2 = f.$$
The pseudoinverse
For data $g$: $$\mathcal{A}^\dagger g = \frac{\langle g, u_1\rangle}{3}\,v_1 + \frac{\langle g, u_2\rangle}{1}\,v_2.$$ For $g$ outside the range of $\mathcal{A}$, $\mathcal{A}^\dagger$ returns the minimum-norm least-squares solution.
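A finite-dimensional check (a sketch: $v_1, v_2, u_1, u_2$ are embedded as orthonormal vectors in $\mathbb{R}^5$, an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # columns: orthonormal v1, v2
U, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # columns: orthonormal u1, u2

A = 3.0 * np.outer(U[:, 0], V[:, 0]) + np.outer(U[:, 1], V[:, 1])   # A f = 3<f,v1>u1 + <f,v2>u2

g = rng.standard_normal(5)                                           # arbitrary data vector
f_formula = (g @ U[:, 0]) / 3.0 * V[:, 0] + (g @ U[:, 1]) * V[:, 1]  # formula above
f_pinv = np.linalg.pinv(A) @ g                                       # numerical pseudoinverse

print(np.allclose(f_formula, f_pinv))                                # True
```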
ex17-picard-convergence
Hard: Let $\mathcal{A}$ be a compact operator with singular system $\{(\sigma_n; v_n, u_n)\}$. Suppose the Picard condition holds: $$\sum_n \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2} < \infty.$$
- Show that the partial sums $f_N = \sum_{n=1}^N \frac{\langle g, u_n\rangle}{\sigma_n}\,v_n$ form a Cauchy sequence in the solution space.
- Conclude that $f^\dagger = \lim_{N \to \infty} f_N$ exists and satisfies $\mathcal{A}f^\dagger = P_{\overline{\mathcal{R}(\mathcal{A})}}\,g$, where $P_{\overline{\mathcal{R}(\mathcal{A})}}$ is the orthogonal projection onto the closure of the range of $\mathcal{A}$.
- Show that the Picard condition is also necessary: if there exists $f$ with $\mathcal{A}f = g$, then the Picard condition holds for $g$.
For Cauchy: compute $\|f_N - f_M\|^2$ for $N > M$.
Since the Picard sum converges, its tails go to zero: the sequence $(f_N)$ is Cauchy.
For the image: $\mathcal{A}f_N = \sum_{n=1}^N \langle g, u_n\rangle\,u_n \to P_{\overline{\mathcal{R}(\mathcal{A})}}\,g$.
For necessity: if $\mathcal{A}f = g$, then $\langle g, u_n\rangle = \sigma_n\,\langle f, v_n\rangle$, so $\frac{\langle g, u_n\rangle}{\sigma_n} = \langle f, v_n\rangle$.
The partial sums are Cauchy
For $N > M$, since the $v_n$ are orthonormal: $$\|f_N - f_M\|^2 = \sum_{n=M+1}^N \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2}.$$ Since the Picard series converges, its tail $\to 0$ as $M \to \infty$. Therefore $(f_N)$ is Cauchy.
The limit exists and satisfies the equation
Since the solution space is a Hilbert space (hence complete), the limit $f^\dagger = \lim_{N \to \infty} f_N = \sum_n \frac{\langle g, u_n\rangle}{\sigma_n}\,v_n$ exists.
Applying $\mathcal{A}$ (which is continuous, hence commutes with the limit): $$\mathcal{A}f^\dagger = \sum_n \langle g, u_n\rangle\,u_n = P_{\overline{\mathcal{R}(\mathcal{A})}}\,g.$$ If $g \in \overline{\mathcal{R}(\mathcal{A})}$, then $\mathcal{A}f^\dagger = g$ exactly.
Necessity of the Picard condition
Suppose $f$ satisfies $\mathcal{A}f = g$. Using the SVD: $$\langle g, u_n\rangle = \langle \mathcal{A}f, u_n\rangle = \langle f, \mathcal{A}^*u_n\rangle = \sigma_n\,\langle f, v_n\rangle.$$ Therefore $\frac{\langle g, u_n\rangle}{\sigma_n} = \langle f, v_n\rangle$, and by Bessel's inequality: $$\sum_n \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2} = \sum_n |\langle f, v_n\rangle|^2 \le \|f\|^2 < \infty.$$ The Picard condition holds.
ex18-noise-picard-violation
Challenge: Let $\mathcal{A}$ be a compact operator with singular system $\{(\sigma_n; v_n, u_n)\}$ and data $g^\delta = g + \eta$, where the exact data $g$ satisfies the Picard condition and $\eta$ is white noise: the coefficients $\eta_n = \langle \eta, u_n\rangle$ are i.i.d. with $\mathbb{E}[\eta_n] = 0$ and $\mathbb{E}[|\eta_n|^2] = \delta^2$ for all $n$.
- Show that the noisy data $g^\delta$ almost surely violates the Picard condition.
- Define the critical index $n^*$ as the largest $n$ such that $\sigma_n \ge \delta$. Show that the signal contribution to the Picard sum is finite (Picard satisfied) but the noise contribution to the Picard sum beyond $n^*$ diverges almost surely.
- Explain why truncating the SVD at $n^*$ — the truncated SVD (TSVD) estimator — is a natural regularization.
- For singular values $\sigma_n = n^{-\alpha}$ ($\alpha > 0$), find $n^*$ explicitly and show that the reconstruction bandwidth decreases as noise increases.
White noise: $\mathbb{E}[\eta_m\,\overline{\eta_n}] = \delta^2$ if $m = n$ and $0$ otherwise.
By the Borel–Cantelli lemma (or a direct argument), infinitely many terms of the noise Picard sum exceed any fixed threshold.
TSVD: $f_{n^*} = \sum_{n \le n^*} \frac{\langle g^\delta, u_n\rangle}{\sigma_n}\,v_n$. Only use data modes where signal exceeds noise.
For $\sigma_n = n^{-\alpha}$: setting $\sigma_{n^*} \approx \delta$ gives $n^* \approx \delta^{-1/\alpha}$.
White noise violates the Picard condition
The Picard sum for the noise component is $\sum_n |\eta_n|^2/\sigma_n^2$. Since the noise coefficients are i.i.d. with variance $\delta^2$, the strong law of large numbers gives $\sum_{n \le N} |\eta_n|^2 \approx \delta^2 N$ almost surely as $N \to \infty$; and since $\sigma_n \to 0$ implies $\sigma_n^{-2} \ge 1$ for all large $n$, the weighted sum $\sum_n |\eta_n|^2/\sigma_n^2$ grows at least as fast and hence diverges for any compact operator with infinitely many singular values. The Picard condition fails almost surely.
The critical index $n^*$
Split the Picard sum for $g^\delta = g + \eta$ into signal and noise contributions (using $|a + b|^2 \le 2|a|^2 + 2|b|^2$): $$\sum_n \frac{|\langle g^\delta, u_n\rangle|^2}{\sigma_n^2} \le 2\sum_n \frac{|\langle g, u_n\rangle|^2}{\sigma_n^2} + 2\sum_n \frac{|\eta_n|^2}{\sigma_n^2}.$$
For $n \le n^*$ (where $\sigma_n \ge \delta$): the noise contribution satisfies $\mathbb{E}\!\left[|\eta_n|^2/\sigma_n^2\right] = \delta^2/\sigma_n^2 \le 1$, so the first $n^*$ terms have bounded expected contribution.
For $n > n^*$ (where $\sigma_n < \delta$): $\mathbb{E}\!\left[|\eta_n|^2/\sigma_n^2\right] = \delta^2/\sigma_n^2 > 1$, and the sum diverges (each term has expectation exceeding $1$).
TSVD as natural regularization
Define the TSVD estimator: $$f_{n^*} = \sum_{n=1}^{n^*} \frac{\langle g^\delta, u_n\rangle}{\sigma_n}\,v_n,$$ where $n^*$ is the last stable mode.
This is natural for two reasons:
- Signal dominates noise for $n \le n^*$: the noise in each recovered coefficient is $\eta_n/\sigma_n$, with standard deviation $\delta/\sigma_n \le 1$ (for exact data on the range of $\mathcal{A}$, the recovered coefficient is $\langle f, v_n\rangle$).
- Noise dominates for $n > n^*$: including these modes amplifies noise by $1/\sigma_n > 1/\delta$, degrading the reconstruction.
TSVD is the hard-thresholding analog of Tikhonov regularization (Chapter 2).
Explicit computation for $\sigma_n = n^{-\alpha}$
Setting $\sigma_{n^*} \approx \delta$, i.e. $(n^*)^{-\alpha} \approx \delta$: $$n^* \approx \delta^{-1/\alpha}.$$
The reconstruction bandwidth $n^*$ decreases as noise increases (fewer modes are stable), and increases as the singular-value decay is slower (larger $\alpha$ means faster decay, hence fewer stable modes).
For example, with $\alpha = 1$ (mildly ill-posed), $n^* \approx 1/\delta$:
- $\delta = 10^{-1}$: $n^* \approx 10$ modes
- $\delta = 10^{-2}$: $n^* \approx 100$ modes
- $\delta = 10^{-3}$: $n^* \approx 1000$ modes
This quantifies the fundamental resolution–noise tradeoff in RF imaging: lower noise enables finer-scale reconstruction.
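A tiny Python sketch of the bandwidth formula (the $\alpha$ and $\delta$ values below are illustrative choices):

```python
import numpy as np

def critical_index(alpha, delta, n_max=10**6):
    """Largest n with sigma_n = n^{-alpha} >= delta (brute force over the first n_max modes)."""
    n = np.arange(1, n_max + 1)
    return int(np.sum(n ** (-float(alpha)) >= delta))

for alpha in (1.0, 2.0):
    for delta in (1e-1, 1e-2, 1e-3):
        print(f"alpha = {alpha:.0f}, delta = {delta:.0e}: "
              f"n* = {critical_index(alpha, delta)}  "
              f"(formula delta^(-1/alpha) = {delta ** (-1.0 / alpha):.1f})")
```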