Chapter 13 Summary
Key Points
1. The sparse recovery problem is underdetermined but not hopeless. With $y = Ax$, $A \in \mathbb{R}^{m \times n}$, and $m < n$, the linear system has infinitely many solutions. A sparsity prior ($\|x\|_0 \le k$) restores identifiability: when $\operatorname{spark}(A) > 2k$, the $k$-sparse solution is unique (sketch 1 after this list).
2. $\ell_0$-minimization is NP-hard; the $\ell_1$ norm is its convex envelope. Natarajan (1995) proved the computational hardness of $\ell_0$-minimization. The relaxation replaces the combinatorial penalty $\|x\|_0$ by its tightest convex surrogate $\|x\|_1$. Geometrically, the unit $\ell_1$ ball is a cross-polytope whose vertices $\pm e_i$ coincide with 1-sparse signals; minimizing $\|x\|_1$ on an affine subspace is most likely to land on a vertex, hence on a sparse solution (sketch 2 below).
3. Basis Pursuit and LASSO are equivalent for suitable parameters. BPDN minimizes $\|x\|_1$ s.t. $\|y - Ax\|_2 \le \epsilon$; LASSO minimizes $\tfrac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1$. They trace the same solution path as $\epsilon$ and $\lambda$ vary (checked numerically in sketch 3 below). Both are convex, hence solvable globally and efficiently by modern algorithms (Ch. 14).
4. The RIP is the right stability condition. $A$ satisfies the RIP of order $k$ with constant $\delta_k$ when $(1 - \delta_k)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k)\|x\|_2^2$ for every $k$-sparse $x$. If $\delta_{2k} < \sqrt{2} - 1$, BPDN recovers every $k$-sparse $x$ stably. Verifying RIP is NP-hard (sketch 4 below makes this concrete), but random Gaussian/Bernoulli matrices satisfy it with overwhelming probability when $m \gtrsim k \log(n/k)$.
5. Coherence is a cheap surrogate for RIP. The mutual coherence $\mu(A) = \max_{i \ne j} |\langle a_i, a_j \rangle| / (\|a_i\|_2 \|a_j\|_2)$ can be computed in $O(mn^2)$ time. The Welch bound gives $\mu \ge \sqrt{(n - m)/(m(n - 1))}$. Recovery holds when $k < \tfrac{1}{2}(1 + 1/\mu)$, a weaker but easier-to-check guarantee than RIP (sketch 5 below).
6. Recovery bounds degrade gracefully under noise. Noiseless RIP gives exact recovery; with noise $\|e\|_2 \le \epsilon$, BPDN returns $\hat{x}$ with $\|\hat{x} - x\|_2 \le C_0 \epsilon + C_1 \sigma_k(x)_1 / \sqrt{k}$. The first term is unavoidable noise propagation; the second penalises non-exact sparsity ($\sigma_k(x)_1$ is the best $k$-term approximation error, computed in sketch 6 below). Oracle inequalities for LASSO show $\|\hat{x} - x\|_2^2 \lesssim \sigma^2 k \log(n) / m$.
Looking Ahead
Chapter 14 develops the algorithms that actually solve BPDN/LASSO at scale: ISTA, FISTA, ADMM, OMP, and Bayesian alternatives. Chapter 15 applies this machinery to wireless problems (sparse channel estimation, massive activity detection, and DOA estimation), where compressed sensing has become the workhorse of CommIT research.