Prerequisites & Notation
Before You Begin
This chapter assumes familiarity with linear estimation (least squares, Tikhonov/ridge regularization) and basic convex analysis. Compressed sensing stands at the intersection of approximation theory, high-dimensional probability, and convex optimization; we will draw on all three.
- Linear least squares and the normal equations (Review ch05)
Self-check: Can you derive the normal equations $A^\top A x = A^\top b$ and state when $A^\top A$ is invertible?
- Singular value decomposition and condition numbers (Review ch06)
Self-check: Can you bound the condition number $\kappa(A)$ in terms of the singular values of $A$?
- Tikhonov (ridge) regularization (Review ch09)
Self-check: Can you explain why ridge regression shrinks coefficients but does not set them to zero?
- Basic convex optimization: convex sets, convex functions, KKT conditions
Self-check: Can you state the KKT conditions for a constrained minimization with linear equality constraints?
- Gaussian concentration (Hoeffding, Bernstein) and random matrix basics
Self-check: Can you state the concentration of $\|Ax\|_2^2$ around $\|x\|_2^2$ for a matrix $A$ with i.i.d. Gaussian entries?
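For the KKT self-check, here is the form the conditions take for a convex, differentiable $f$ minimized subject to linear equality constraints $Ax = b$ (with no inequality constraints, dual feasibility and complementary slackness are vacuous):

```latex
\nabla f(x^\star) + A^\top \nu^\star = 0
  \quad \text{(stationarity)},
\qquad
A x^\star = b
  \quad \text{(primal feasibility)},
```

where $\nu^\star$ is the Lagrange multiplier for the equality constraint. For nonsmooth objectives such as $\|x\|_1$, the gradient is replaced by a subgradient.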
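As a quick refresher on the first two prerequisites, the following NumPy sketch solves a least-squares problem via the normal equations and reads the condition number off the singular values (the data here are illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # tall Gaussian matrix: full column rank w.h.p.
x_true = rng.standard_normal(5)
b = A @ x_true

# Normal equations: A^T A x = A^T b.  A^T A is invertible iff A has full
# column rank, i.e. its smallest singular value is strictly positive.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Condition number from the SVD: kappa(A) = sigma_max / sigma_min.
sv = np.linalg.svd(A, compute_uv=False)
kappa = sv[0] / sv[-1]
print(kappa)
```

In exact arithmetic `x_ne` recovers `x_true`; numerically, forming $A^\top A$ squares the condition number, which is why QR- or SVD-based solvers are preferred in practice.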
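The ridge self-check can be verified numerically: increasing the regularization weight shrinks the solution norm, yet every coefficient stays (generically) nonzero. A minimal sketch, with illustrative synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
b = A @ x_true + 0.05 * rng.standard_normal(50)

# Ridge solves (A^T A + lam*I) x = A^T b.  Larger lam shrinks the solution
# toward zero, but the l2 penalty never zeroes a coefficient exactly --
# unlike the l1 penalty of the LASSO introduced later in the chapter.
norms, nonzeros = [], []
for lam in (0.0, 1.0, 100.0):
    x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
    norms.append(np.linalg.norm(x_ridge))
    nonzeros.append(np.count_nonzero(x_ridge))
print(norms, nonzeros)
```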
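The concentration phenomenon in the last self-check can be seen empirically. For $A$ with i.i.d. $\mathcal{N}(0, 1/m)$ entries and a unit vector $x$, the quantity $\|Ax\|_2^2$ is a scaled $\chi^2_m$ variable with mean $\|x\|_2^2 = 1$ and standard deviation $\sqrt{2/m}$. A sketch (dimensions chosen only to keep the simulation fast):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, trials = 500, 100, 300
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                 # unit vector, so E||Ax||^2 = 1

# Entries N(0, 1/m) make A an isometry in expectation:
# E||Ax||_2^2 = ||x||_2^2, with fluctuations of order sqrt(2/m).
vals = np.empty(trials)
for t in range(trials):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    vals[t] = np.linalg.norm(A @ x) ** 2

print(vals.mean(), vals.std())         # mean near 1, std near sqrt(2/m)
```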
Notation for This Chapter
The table below lists the symbols introduced in this chapter. In all that follows, $A \in \mathbb{R}^{m \times n}$ denotes the sensing (measurement) matrix with $m \ll n$. Vectors are column vectors; $\mathrm{supp}(x)$ denotes the support of a sparse vector $x$.
| Symbol | Meaning | Introduced |
|---|---|---|
| $n$ | Ambient dimension (number of unknowns) | s01 |
| $m$ | Number of measurements, $m \ll n$ | s01 |
| $s$ | Sparsity level: number of nonzero entries in $x$ | s01 |
| $S$ | Support set: $S = \{\, i : x_i \neq 0 \,\}$ | s01 |
| $\|x\|_0$ | Number of nonzero entries of $x$ (not a true norm) | s01 |
| $A$ | Sensing / measurement matrix of size $m \times n$ | s01 |
| $e$ | Noise vector, typically $\|e\|_2 \leq \varepsilon$ | s01 |
| $\delta_s$ | Restricted isometry constant of order $s$ | s03 |
| $\mu(A)$ | Mutual coherence of $A$ | s03 |
| $\lambda$ | Regularization parameter for LASSO / BPDN | s02 |
| $\hat{x}$ | LASSO solution (sparse estimator) | s02 |
| $\sigma_s(x)_1$ | Best $s$-term approximation error in $\ell_1$: $\sigma_s(x)_1 = \min_{\|z\|_0 \leq s} \|x - z\|_1$ | s04 |
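To make the sparsity notation concrete, a small NumPy sketch of $\|x\|_0$, the support $S$, and the best $s$-term approximation error (the vector and names here are illustrative only):

```python
import numpy as np

x = np.array([0.0, 3.0, 0.0, -1.0, 0.5, 0.0])

# ||x||_0: number of nonzero entries (not a norm: it is not homogeneous).
l0 = np.count_nonzero(x)

# Support set S = {i : x_i != 0}.
S = np.flatnonzero(x)

# Best s-term approximation error in l1: keep the s largest-magnitude
# entries and sum the absolute values of everything discarded.
s = 2
sigma_s = np.sort(np.abs(x))[:-s].sum()

print(l0, S, sigma_s)
```

For an exactly $s$-sparse vector, $\sigma_s(x)_1 = 0$; the chapter's recovery guarantees degrade gracefully in this quantity.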