Exercises
ex01-lasso-implementation
Easy (a) Generate a 1D sparse signal with a few nonzero entries (random complex amplitudes). Create noisy measurements $y = Ax + n$ with complex Gaussian noise at a specified SNR in dB.
(b) Implement FISTA with complex soft-thresholding. Run for 200 iterations with step size $1/L$.
(c) Plot the objective function vs. iteration and verify the $O(1/k^2)$ convergence rate.
(d) Implement the debiasing step: restrict to the detected support and solve least-squares. Compare the amplitude error before and after debiasing.
Compute the Lipschitz constant $L = \|A\|_2^2$ (the largest singular value of $A$, squared) via power iteration.
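This can be sketched in NumPy (a minimal illustration; the function name, iteration count, and seed are assumptions, not part of the exercise):

```python
import numpy as np

def lipschitz_power_iteration(A, n_iter=100, seed=0):
    """Estimate L = ||A||_2^2 (largest eigenvalue of A^H A) by power
    iteration, applying A and A^H as matvecs without forming A^H A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1]) + 1j * rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A.conj().T @ (A @ v)
        v = w / np.linalg.norm(w)
    # Rayleigh quotient of the converged vector gives the top eigenvalue
    return float(np.real(v.conj() @ (A.conj().T @ (A @ v))))
```

The FISTA step size is then `1.0 / L`.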
Complex soft-thresholding: $\mathcal{S}_\tau(x) = \frac{x}{|x|}\max(|x| - \tau, 0)$, applied elementwise, with $\mathcal{S}_\tau(0) = 0$.
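In code (vectorized NumPy; the tiny constant guarding the $0/0$ case is an implementation detail):

```python
import numpy as np

def soft_threshold_complex(x, tau):
    """Complex soft-thresholding: shrink |x| by tau, keep the phase.
    Returns x/|x| * max(|x| - tau, 0), with 0 mapped to 0."""
    mag = np.abs(x)
    return x * (np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-30))
```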
FISTA convergence
The convergence curve should show $F(x^{(k)}) - F^\star = O(1/k^2)$. Plot on a log-log scale and verify the slope is approximately $-2$.
Debiasing effect
Debiasing typically reduces the amplitude RMSE by 30--50% because it removes the $\lambda$-dependent shrinkage.
ex02-lambda-selection
Easy Using the setup from Exercise 1:
(a) Run FISTA over a logarithmically spaced grid of $\lambda$ values. For each, record the final NMSE and the residual norm $\|y - A\hat{x}_\lambda\|_2$.
(b) Implement the discrepancy principle: find $\lambda$ such that $\|y - A\hat{x}_\lambda\|_2 \approx \sqrt{M}\,\sigma$ (the expected noise norm) via bisection search.
(c) Implement 5-fold cross-validation on 20% held-out measurements.
(d) Compare the three methods (grid search, discrepancy, CV). Which yields the lowest NMSE?
For bisection, search on a log scale, i.e. bisect on $\log_{10}\lambda$.
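A sketch of the log-scale bisection, written against a caller-supplied `residual_norm(lam)` (e.g. FISTA run to convergence for the given $\lambda$); the bracket $[10^{-4}, 10^{2}]$ and the iteration count are illustrative assumptions:

```python
import numpy as np

def discrepancy_bisection(residual_norm, target, log10_lo=-4.0, log10_hi=2.0,
                          n_iter=40):
    """Bisection on log10(lambda) for the discrepancy principle.
    Relies on the LASSO residual ||y - A x(lambda)|| being monotonically
    increasing in lambda: a too-small residual means lambda is too small."""
    lo, hi = log10_lo, log10_hi
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if residual_norm(10.0 ** mid) < target:
            lo = mid  # residual below target -> increase lambda
        else:
            hi = mid
    return 10.0 ** (0.5 * (lo + hi))
```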
Expected outcome
The discrepancy principle should yield NMSE within 1 dB of the optimal grid-search value because it directly matches the noise level. CV may be slightly worse due to the measurement splitting.
ex03-admm-lasso
Easy (a) Implement ADMM for the LASSO with splitting $z = x$ and penalty parameter $\rho$.
(b) Run for 100 iterations. Plot primal and dual residuals.
(c) Implement adaptive $\rho$ via residual balancing: increase $\rho$ when the primal residual dominates the dual residual, and decrease it in the opposite case. Compare convergence with fixed $\rho$.
(d) Compare the final ADMM solution with FISTA. Verify they are identical up to numerical precision.
Precompute the Cholesky factorization of $A^H A + \rho I$ for the $x$-update.
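A minimal sketch with the factorization hoisted out of the loop (SciPy's `cho_factor`/`cho_solve`; the fixed $\rho$ and iteration count are illustrative):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_lasso(A, y, lam, rho=1.0, n_iter=100):
    """ADMM for min_x 0.5||y - Ax||^2 + lam||x||_1 with splitting z = x.
    (A^H A + rho I) is factorized once; every x-update is then only a
    pair of triangular solves."""
    N = A.shape[1]
    AHy = A.conj().T @ y
    factor = cho_factor(A.conj().T @ A + rho * np.eye(N))
    x = np.zeros(N, dtype=complex)
    z = np.zeros(N, dtype=complex)
    u = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        x = cho_solve(factor, AHy + rho * (z - u))    # quadratic step
        v = x + u
        mag = np.abs(v)                               # z-update: complex
        z = v * (np.maximum(mag - lam / rho, 0.0)     # soft-thresholding
                 / np.maximum(mag, 1e-30))
        u = u + x - z                                 # dual ascent
    return z
```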
Adaptive $\rho$ advantage
Adaptive $\rho$ typically converges in 30--40% fewer iterations than fixed $\rho$, because it balances the primal and dual convergence rates.
ex04-2d-imaging
Medium (a) Create a 2D MIMO radar imaging setup at 10 GHz with 2 GHz bandwidth. Scene: a 2D grid containing 8 point scatterers, at a specified SNR (dB).
(b) Compute the matched filter image $\hat{x}_{\mathrm{MF}} = A^H y$.
(c) Run FISTA for 100 iterations. Compare MF and FISTA in NMSE, SSIM, and support F1-score.
(d) Apply debiasing. How much does it improve the amplitude RMSE?
(e) Vary the SNR from 5 to 40 dB. Plot NMSE vs. SNR for MF, FISTA, and FISTA+debiasing.
Use the Kronecker structure for fast matvecs: $(B \otimes C)\,\mathrm{vec}(X) = \mathrm{vec}(C X B^{T})$.
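The identity (with column-major `vec`) can be applied and checked directly in NumPy:

```python
import numpy as np

def kron_matvec(B, C, x):
    """Apply (B kron C) to x without forming the Kronecker product.
    x has length B.shape[1] * C.shape[1]; vec() is column-major ('F')."""
    X = x.reshape(C.shape[1], B.shape[1], order="F")  # un-vec
    return (C @ X @ B.T).reshape(-1, order="F")       # vec(C X B^T)
```

This replaces one large matvec with two small matrix products.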
Key result
FISTA should outperform MF by 10--15 dB in NMSE. Debiasing adds 1--3 dB improvement. The advantage grows with SNR because the LASSO can better separate signal from noise.
ex05-tv-imaging
Medium (a) Create a 2D scene with 3 extended targets: a rectangle, a circle, and an L-shape, all piecewise-constant.
(b) Generate compressed measurements (fewer measurements than scene pixels) using a random sensing matrix, at a specified SNR (dB).
(c) Reconstruct using: (i) $\ell_1$ (FISTA), (ii) isotropic TV (ADMM), (iii) anisotropic TV (ADMM).
(d) Compare NMSE, SSIM, and edge preservation.
(e) Add 5 point scatterers to the scene. Reconstruct with $\ell_1$, TV, and $\ell_1$ + TV. Show that the combined penalty is superior for this mixed scene.
For the TV ADMM, use CG with 10 inner iterations for the $x$-update.
Expected ranking
For pure extended targets: isotropic TV > anisotropic TV > $\ell_1$. For mixed scenes: $\ell_1$ + TV > TV > $\ell_1$. Isotropic TV preserves diagonal edges better than anisotropic TV.
ex06-group-lasso
Medium (a) Create a radar imaging scene where each scatterer has a frequency-dependent reflectivity, with an independent random complex amplitude per frequency band. Use a 2D grid, 4 frequency bands, and a few point scatterers.
(b) Formulate as a group LASSO with groups = (pixel, all frequencies): each group collects one pixel across the 4 bands, so the group size equals the number of bands.
(c) Solve with: (i) standard LASSO (independent per frequency), (ii) group LASSO (FISTA with block soft-thresholding).
(d) Compare support recovery F1-score and amplitude RMSE.
(e) Reduce measurements per band to 50% of Nyquist. How much does group LASSO improve over standard LASSO?
Block soft-thresholding: $\mathcal{S}_\tau(x_g) = \frac{x_g}{\|x_g\|_2}\max(\|x_g\|_2 - \tau, 0)$.
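With the unknowns arranged as a matrix whose rows are groups (one pixel across all bands), the operator vectorizes in a few lines of NumPy:

```python
import numpy as np

def block_soft_threshold(X, tau):
    """Row-wise group soft-thresholding: each row x_g becomes
    x_g * max(||x_g||_2 - tau, 0) / ||x_g||_2. A row is kept or zeroed
    as a whole -- this is what couples the frequency bands."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * (np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-30))
```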
Expected improvement
Group LASSO should improve support F1 by 15--25% at moderate SNR because it borrows strength across frequencies. The improvement is largest at the 50% compression level.
ex07-omp-vs-lasso
Medium (a) Generate 1D imaging problems with fixed measurement and grid sizes and varying sparsity $K$, at a specified SNR (dB).
(b) For each $K$, run: OMP (stopping after $K$ iterations), CoSaMP (target sparsity $K$), and LASSO (FISTA, optimal $\lambda$).
(c) Plot NMSE vs. $K$ for all three methods. At what $K$ does each method start failing?
(d) Plot computation time vs. $K$. At what $K$ is OMP faster than FISTA?
(e) Make the signal compressible (power-law coefficient decay). How do the results change?
OMP fails when $K$ exceeds the coherence limit; FISTA handles compressible signals gracefully.
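For reference, a compact OMP implementation (assumes the target sparsity $K$ is known and is used as the stopping rule):

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then re-fit all selected columns by
    least squares. Stops after K iterations."""
    support = []
    r = y.astype(complex)
    coef = np.zeros(0, dtype=complex)
    for _ in range(K):
        idx = int(np.argmax(np.abs(A.conj().T @ r)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x
```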
Crossover point
OMP is typically faster at small $K$ and fails once $K$ approaches the coherence limit. FISTA is more robust and remains effective for compressible signals, where $K$ is not well defined.
ex08-admm-composite
Hard (a) Implement ADMM for the composite problem $\min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda_1\|x\|_1 + \lambda_{\mathrm{TV}}\,\mathrm{TV}(x)$. Use two splitting variables.
(b) Create a scene with 5 point scatterers and 2 extended targets, at a specified SNR (dB).
(c) Sweep $(\lambda_1, \lambda_{\mathrm{TV}})$ on a 2D grid. Plot the 2D NMSE surface.
(d) Find the optimal pair. How does the ratio $\lambda_1/\lambda_{\mathrm{TV}}$ relate to the scene composition?
(e) Compare pure $\ell_1$, pure TV, and the optimal combination.
The $x$-update requires solving a linear system involving $A^H A + \rho(I + D^H D)$, where $D$ is the discrete gradient operator.
Optimal ratio
The optimal ratio $\lambda_1/\lambda_{\mathrm{TV}}$ is typically proportional to the ratio of point scatterers to extended-target edge pixels. More point targets favor larger $\lambda_1$.
ex09-gridless-refinement
Hard (a) Generate a scene with 5 point scatterers at off-grid positions, to be reconstructed on a coarse grid.
(b) Solve standard LASSO on the grid. Identify detected scatterers.
(c) Implement SPARROW-style refinement: parameterize the scatterer positions continuously and optimize them by gradient descent on the data fidelity.
(d) Compare position error (RMSE) before and after refinement.
(e) Increase the grid density. At what density does refinement become unnecessary?
The position gradient involves the derivative of the dictionary atom with respect to the scatterer position, $\partial a(p)/\partial p$.
Refinement benefit
SPARROW refinement typically reduces position RMSE by 5--10x for coarse grids. On a fine grid, the improvement is marginal.
ex10-mmv-radar
Hard (a) Simulate a radar observing a fixed set of scatterers across $T$ snapshots, with a specified per-snapshot SNR (dB) and Swerling I amplitudes.
(b) For each $T$, solve: (i) independent LASSO per snapshot, then threshold the averaged support; (ii) MMV (joint $\ell_{2,1}$ recovery across snapshots).
(c) Plot support F1-score vs. $T$ for both methods.
(d) What is the minimum $T$ for 90% support detection?
(e) Compute the "snapshot gain" (dB equivalent of extra measurements from joint processing).
MMV primarily improves support detection; amplitude estimation benefits from averaging after support recovery.
Snapshot gain
MMV typically achieves the same support F1 with 2--4x fewer snapshots than independent LASSO, corresponding to an equivalent SNR gain of several dB.
ex11-pesavento
Hard (a) Implement the Pesavento iteratively reweighted algorithm for the MMV problem.
(b) Compare with standard group LASSO on the multi-frequency imaging setup from Exercise 6.
(c) Run for 10 outer reweighting iterations. Plot the support F1-score vs. outer iteration.
(d) Does the reweighted algorithm converge to a sparser solution than standard group LASSO? Count the number of nonzero rows.
(e) Compare the computational cost per outer iteration with solving a single group LASSO from scratch.
Use the weight update $w_i \leftarrow 1/(\|x_i\|_2 + \epsilon)$ with a small $\epsilon > 0$.
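The reweighting step itself is one line; a sketch with the rows of $X$ as groups (the default $\epsilon$ is an assumption):

```python
import numpy as np

def reweight(X, eps=1e-3):
    """w_i = 1 / (||x_i||_2 + eps): rows that are already small receive a
    large weight and are pushed harder toward zero in the next weighted
    group-LASSO solve; eps keeps the weights bounded."""
    return 1.0 / (np.linalg.norm(X, axis=1) + eps)
```

Each outer iteration then solves a group LASSO whose per-row threshold is scaled by $w_i$.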
Expected behavior
The reweighted algorithm typically converges in 5--8 outer iterations and produces 20--30% fewer false positives than standard group LASSO, because the reweighting approximates the $\ell_0$ (row-sparsity) penalty.
ex12-tgv
Hard (a) Create a scene with a target whose reflectivity varies linearly across a region (a smooth ramp), surrounded by zero background.
(b) Reconstruct with: (i) isotropic TV (ADMM), (ii) TGV order 2 (ADMM with two auxiliary variables).
(c) Compare visually: does TV produce staircase artifacts? Does TGV recover the smooth gradient?
(d) Sweep the TGV parameters $\alpha_0, \alpha_1$ and find the optimal ratio $\alpha_1/\alpha_0$.
(e) Add sharp edges to the scene (a rectangular cutout inside the gradient region). Does TGV preserve both the edges and the smooth gradient?
TGV ADMM requires two splitting variables: one for the first-order difference $\nabla x - v$ and one for the symmetrized derivative of $v$.
Staircase effect
TV approximates the linear gradient with 5--10 constant steps (staircase). TGV with an appropriate $\alpha_1/\alpha_0$ recovers the smooth gradient, with an NMSE improvement of 5--8 dB.
ex13-algorithm-comparison
Hard Build a systematic comparison of all algorithms on a standardized RF imaging benchmark:
(a) Setup: a MIMO radar with a fixed measurement budget and scene grid, at a specified SNR (dB). Use three sparsity levels (low, medium, high).
(b) Algorithms: Matched filter, FISTA, ADMM, OMP, CoSaMP, IHT.
(c) Metrics: NMSE, support F1, computation time, and iterations needed to reach a target NMSE (in dB).
(d) Create a comparison table. For each sparsity level, rank by NMSE and by speed.
(e) Plot Pareto fronts (NMSE vs. time) for each sparsity level. Which algorithms are Pareto-optimal?
Use warm-starting (initialize from matched filter) for faster convergence.
Expected Pareto front
At the lowest sparsity level, OMP dominates the fast region and FISTA the accurate region. At the highest sparsity level, FISTA dominates everywhere and the greedy methods fail. ADMM with TV is Pareto-optimal for mixed scenes.
ex14-atomic-norm
Hard (a) Implement the atomic norm SDP for a ULA. Use a standard SDP solver (e.g., CVXPY with SCS).
(b) Generate a few point sources at off-grid angles. Solve the SDP and extract the source angles from the dual variable.
(c) Compare with grid-based LASSO on grids of increasing density. Plot position error vs. grid density.
(d) Vary the source separation from a small fraction of the Rayleigh limit up to the Rayleigh limit. At what separation does the SDP fail?
(e) Compare computation time: SDP vs. LASSO vs. SPARROW.
The SDP variable is a PSD matrix with Toeplitz structure in the top-left block.
Super-resolution limit
The SDP typically resolves sources down to a fraction of the Rayleigh width at sufficiently high SNR; below that separation, it merges the sources into a single peak.
ex15-full-pipeline
Challenge Build an end-to-end sparse RF imaging pipeline:
(a) System: MIMO radar at 77 GHz with 4 GHz bandwidth. Scene: a 2D grid at 5 m range.
(b) Scene: Urban scenario with 10 vehicles (extended, 3--5 pixels each), 20 pedestrians (point-like), and 5 buildings (large extended).
(c) Compression: Randomly subsample 30% of measurements.
(d) Pipeline: (i) matched filter (fast preview); (ii) FISTA ($\ell_1$) for sparse reconstruction; (iii) ADMM ($\ell_1$ + TV) for the final image; (iv) SPARROW refinement for the 10 strongest scatterers.
(e) Evaluation: NMSE, SSIM, target detection (detection and false-alarm rates), and computation time per stage.
(f) Create a processing-time vs. quality curve. What is the optimal stopping point under a 100 ms budget? 500 ms? 5 s?
The pipeline should progressively refine: MF -> FISTA -> ADMM -> SPARROW.
Use Kronecker structure for fast computation throughout.
Expected pipeline performance
MF: ~10 ms, coarse NMSE. FISTA (30 iterations): ~150 ms, substantially lower NMSE. ADMM (50 iterations): ~500 ms, the lowest NMSE. SPARROW: +50 ms, position RMSE below one pixel for strong targets.
Time budget allocation
At a 100 ms budget: FISTA with 20 iterations. At 500 ms: ADMM with $\ell_1$ + TV. At 5 s: the full pipeline with SPARROW refinement.