Exercises

ch17-ex01

Easy

(AMP Implementation and the Onsager Effect)

(a) Implement AMP with soft thresholding for $N = 500$, $M = 200$, i.i.d. Gaussian $\mathbf{A}$, Bernoulli-Gaussian reflectivity with $\rho = 0.1$, $\text{SNR} = 25\,\text{dB}$.

(b) Plot NMSE vs. iteration for 50 iterations.

(c) Remove the Onsager correction and plot the resulting ISTA convergence alongside AMP.

(d) Quantify the NMSE gap at convergence.
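Parts (a)–(c) can be prototyped in a few lines of NumPy. A minimal sketch (the threshold policy $\hat{\tau}_t = \|\mathbf{z}^t\|/\sqrt{M}$ and all variable names are illustrative choices, not prescribed by the exercise); setting `onsager=False` gives the uncorrected, ISTA-like iteration for part (c):

```python
import numpy as np

def soft(x, thr):
    """Soft-thresholding denoiser eta(x; thr)."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def amp(y, A, iters=50, onsager=True):
    """AMP with soft thresholding; onsager=False drops the correction term."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(M)   # residual-based noise estimate
        x_new = soft(x + A.T @ z, tau)
        b = np.count_nonzero(x_new) / M if onsager else 0.0
        z = y - A @ x_new + b * z              # Onsager-corrected residual
        x = x_new
    return x

# toy instance with the dimensions from part (a)
rng = np.random.default_rng(0)
N, M, rho = 500, 200, 0.1
A = rng.standard_normal((M, N)) / np.sqrt(M)   # unit-norm columns in expectation
c = rng.standard_normal(N) * (rng.random(N) < rho)
sigma = np.linalg.norm(A @ c) / np.sqrt(M) * 10 ** (-25 / 20)  # 25 dB SNR
y = A @ c + sigma * rng.standard_normal(M)
nmse = np.sum((amp(y, A) - c) ** 2) / np.sum(c ** 2)
```

For part (d), rerun with `onsager=False` and compare the two NMSE trajectories on the same axes.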

ch17-ex02

Easy

(State Evolution Verification)

(a) Implement the AMP state evolution recursion $\tau_{t+1} = \sigma^2 + \frac{1}{\delta} \mathbb{E}[|\eta_t(C_0 + \sqrt{\tau_t}Z) - C_0|^2]$ with numerical integration.

(b) Run AMP for $N = 2000$ and compare empirical MSE with SE.

(c) Repeat for $N = 100, 500, 5000$. At what $N$ does the empirical MSE match SE to within 5%?
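The expectation in the recursion can also be evaluated by Monte Carlo instead of quadrature. A sketch assuming unit-variance Bernoulli-Gaussian $C_0$ and the common threshold policy $\theta\sqrt{\tau_t}$ (both assumptions are ours):

```python
import numpy as np

def se_recursion(sigma2, delta, rho, theta=1.0, iters=50, n_mc=200_000, seed=0):
    """State evolution for AMP + soft thresholding, Bernoulli-Gaussian prior.

    tau_{t+1} = sigma^2 + (1/delta) E[(eta_t(C0 + sqrt(tau_t) Z) - C0)^2],
    with the expectation estimated by Monte Carlo.
    """
    rng = np.random.default_rng(seed)
    c0 = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho)  # BG samples
    z = rng.standard_normal(n_mc)
    tau = sigma2 + rho / delta   # tau_0 from the all-zero init: E[C0^2] = rho
    taus = [tau]
    for _ in range(iters):
        r = c0 + np.sqrt(tau) * z
        eta = np.sign(r) * np.maximum(np.abs(r) - theta * np.sqrt(tau), 0.0)
        tau = sigma2 + np.mean((eta - c0) ** 2) / delta
        taus.append(tau)
    return np.array(taus)

taus = se_recursion(sigma2=1e-3, delta=0.4, rho=0.1)
```

For parts (b)–(c), overlay `taus` on the empirical per-iteration MSE of the AMP runs at each $N$.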

ch17-ex03

Easy

(AMP Divergence on Structured Matrices)

(a) Generate three sensing matrices with $M = 400$, $N = 1000$: (i) i.i.d. Gaussian, (ii) partial DCT, (iii) Kronecker $\mathbf{A}_{1} \otimes \mathbf{A}_{2}$ with Gaussian factors.

(b) Run AMP with soft thresholding on each. Plot NMSE vs. iteration.

(c) Which matrices cause divergence?

(d) Compare the eigenvalue distributions of $\mathbf{A}^{H}\mathbf{A}$ for each.
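One way to generate the three ensembles of part (a); the DCT construction, the row-selection scheme, and the normalizations are our choices:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (rows are the DCT basis vectors)."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
M, N = 400, 1000

# (i) i.i.d. Gaussian with unit-norm columns in expectation
A_gauss = rng.standard_normal((M, N)) / np.sqrt(M)

# (ii) partial DCT: random row subset, scaled so A A^T = (N/M) I
rows = rng.choice(N, size=M, replace=False)
A_dct = dct_matrix(N)[rows] * np.sqrt(N / M)

# (iii) Kronecker product of two Gaussian factors (20*20 = 400 rows, 25*40 = 1000 cols)
A_kron = np.kron(rng.standard_normal((20, 25)),
                 rng.standard_normal((20, 40))) / np.sqrt(M)
```

For part (d), compare `np.linalg.svd(A, compute_uv=False)**2` across the three: the partial-DCT spectrum is exactly flat, while the Kronecker spectrum is typically far more spread out than the i.i.d. Gaussian one, which is what destabilizes AMP.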

ch17-ex04

Easy

(Basic OAMP Implementation)

(a) Implement OAMP with soft thresholding for $N = 500$, $M = 200$, i.i.d. Gaussian $\mathbf{A}$.

(b) Compare OAMP and AMP on this Gaussian matrix. Verify they achieve the same NMSE.

(c) Replace $\mathbf{A}$ with a partial DCT matrix. Run both algorithms. Verify AMP diverges while OAMP converges.

(d) Report the SVD computation time for the $200 \times 500$ matrix.

ch17-ex05

Easy

(Verifying the Orthogonality Condition)

(a) Run OAMP on a partial DCT matrix with $N = 500$, $M = 200$. At each iteration, compute $\rho^t = \frac{|\mathbf{e}_1^{t\,H}\mathbf{e}_2^{t-1}|}{\|\mathbf{e}_1^t\|\|\mathbf{e}_2^{t-1}\|}$.

(b) Plot $\rho^t$ vs. iteration. Verify it stays near zero.

(c) Repeat for AMP (with and without Onsager) on the same matrix.
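A small helper for the metric in part (a), assuming the ground-truth reflectivity is available in simulation so the error vectors can be formed directly:

```python
import numpy as np

def error_correlation(e1, e2):
    """rho^t = |e1^H e2| / (||e1|| ||e2||); near zero when the errors are orthogonal."""
    return np.abs(np.vdot(e1, e2)) / (np.linalg.norm(e1) * np.linalg.norm(e2))
```

Call it each iteration with $\mathbf{e}_1^t$ the LE output error and $\mathbf{e}_2^{t-1}$ the previous NLE output error; OAMP should keep the value near zero on the partial DCT matrix, while AMP without the Onsager term lets it grow.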

ch17-ex06

Medium

(OAMP State Evolution)

(a) Implement the two-step OAMP state evolution: $v_1^t = F(v_2^{t-1})$ (from singular values) and $v_2^t = \text{mmse}(\eta_t, v_1^t)$.

(b) Run OAMP with BG-MMSE denoiser for $N = 1000$, $\delta = 0.4$, $\rho = 0.1$, $\text{SNR} = 25\,\text{dB}$ on a Kronecker partial-DFT matrix.

(c) Compare empirical MSE with SE prediction at each iteration.

(d) Plot the SE fixed-point NMSE vs. $\delta$ for $\delta \in [0.2, 0.8]$.
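One EP-style form of this two-step recursion, sketched under our own assumptions: `tau2` plays the role of $v_1^t$, the extrinsic `v2` plays the role of $v_2^{t-1}$, the spectrum is taken flat ($s_i^2 = 1$, i.e. a row-orthogonal $\mathbf{A}$ such as a scaled partial DFT), and the extrinsic-variance bookkeeping is one common convention, not necessarily the exact form used in the chapter:

```python
import numpy as np

def bg_mmse(r, tau2, rho, sigc2):
    """Posterior mean for a Bernoulli-Gaussian prior observed in N(0, tau2) noise."""
    g = sigc2 / (sigc2 + tau2)
    log_ratio = (np.log((1 - rho) / rho) + 0.5 * np.log((sigc2 + tau2) / tau2)
                 - r**2 * sigc2 / (2 * tau2 * (sigc2 + tau2)))
    return g * r / (1.0 + np.exp(log_ratio))

def oamp_se(s2, N, sigma2, rho, sigc2=1.0, iters=30, n_mc=200_000, seed=0):
    """EP-style OAMP state evolution. s2: squared singular values of A (length M)."""
    rng = np.random.default_rng(seed)
    c0 = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho) * np.sqrt(sigc2)
    z = rng.standard_normal(n_mc)
    v2 = rho * sigc2                        # error variance of the all-zero start
    mses = []
    for _ in range(iters):
        gamma = np.sum(v2 * s2 / (v2 * s2 + sigma2)) / N
        tau2 = v2 * (1.0 - gamma) / gamma   # LE output error variance (v_1^t)
        mse = np.mean((bg_mmse(c0 + np.sqrt(tau2) * z, tau2, rho, sigc2) - c0) ** 2)
        v2 = 1.0 / (1.0 / mse - 1.0 / tau2) # extrinsic variance after the NLE
        mses.append(mse)
    return np.array(mses)

# flat squared spectrum: row-orthogonal A with delta = 0.4
N, delta = 1000, 0.4
M = int(delta * N)
mses = oamp_se(np.ones(M), N, sigma2=3e-4, rho=0.1)
```

For part (b), replace `np.ones(M)` with the actual squared singular values of the Kronecker partial-DFT matrix; for part (d), sweep $\delta$ and record `mses[-1]`.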

ch17-ex07

Medium

(BG-MMSE Denoiser Derivation and Implementation)

(a) Derive the Bayes-optimal denoiser for the Bernoulli-Gaussian prior $p_0(c) = (1-\rho)\delta(c) + \rho\,\mathcal{CN}(0, \sigma_c^2)$.

(b) Implement the denoiser and verify on synthetic data.

(c) Compute the divergence analytically.

(d) Compare OAMP with BG-MMSE vs. soft thresholding at $\delta = 0.3$, $\rho = 0.1$, $\text{SNR} = 25\,\text{dB}$.

(e) Repeat with Bernoulli-Rademacher prior.
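For the real-valued version of part (a) (the complex case, as in ch17-ex11, swaps the Gaussian densities for circularly-symmetric complex ones), the posterior mean works out to $\eta(r) = \pi(r)\,\frac{\sigma_c^2}{\sigma_c^2+\tau^2}\,r$, where $\pi(r)$ is the posterior support probability. A sketch with the evidence ratio computed in the log domain for stability:

```python
import numpy as np

def bg_mmse(r, tau2, rho=0.1, sigc2=1.0):
    """MMSE (posterior-mean) denoiser for r = c + N(0, tau2) noise, real BG prior."""
    g = sigc2 / (sigc2 + tau2)                 # Wiener gain of the active component
    # log of the spike-vs-slab evidence ratio
    log_ratio = (np.log((1 - rho) / rho)
                 + 0.5 * np.log((sigc2 + tau2) / tau2)
                 - r**2 * sigc2 / (2 * tau2 * (sigc2 + tau2)))
    pi = 1.0 / (1.0 + np.exp(log_ratio))       # posterior support probability
    return pi * g * r

# Monte Carlo sanity check: the posterior mean should clearly beat the raw input
rng = np.random.default_rng(0)
c = rng.standard_normal(200_000) * (rng.random(200_000) < 0.1)
tau2 = 0.25
r = c + np.sqrt(tau2) * rng.standard_normal(200_000)
mse_raw = np.mean((r - c) ** 2)                # approx tau2
mse_den = np.mean((bg_mmse(r, tau2) - c) ** 2)
```

The divergence required by part (c) follows by differentiating $\pi(r)\,g\,r$ with respect to $r$ and averaging over the entries.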

ch17-ex08

Medium

(Kronecker LMMSE Speedup)

(a) Implement the naive OAMP LMMSE step (forming $\mathbf{A}^{H}\mathbf{A}$ and inverting).

(b) Implement the Kronecker-factored LMMSE step.

(c) Verify both produce the same output (up to numerical precision).

(d) Benchmark both for $N_1 = N_2 \in \{16, 32, 64, 128\}$. Plot wall-clock time vs. $N = N_1 N_2$.

(e) At what $N$ does the Kronecker version become 100x faster?
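The identity behind part (b) is $(\mathbf{A}_1 \otimes \mathbf{A}_2)\,\text{vec}(\mathbf{X}) = \text{vec}(\mathbf{A}_1 \mathbf{X} \mathbf{A}_2^T)$ (for row-major/C-order vectorization), and the SVD inherits the same factorization, so the Kronecker product never has to be materialized. A small sketch on toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
m1, n1, m2, n2 = 5, 8, 6, 7
A1 = rng.standard_normal((m1, n1))
A2 = rng.standard_normal((m2, n2))
X = rng.standard_normal((n1, n2))

# naive matvec: materialize the full (m1*m2) x (n1*n2) Kronecker product
y_naive = np.kron(A1, A2) @ X.ravel()

# factored matvec: two small matrix products, no Kronecker product formed
y_fast = (A1 @ X @ A2.T).ravel()

# the singular values of A1 (x) A2 are all pairwise products s1_i * s2_j
s1 = np.linalg.svd(A1, compute_uv=False)
s2 = np.linalg.svd(A2, compute_uv=False)
s_kron = np.sort(np.outer(s1, s2).ravel())[::-1]
```

The factored matvec costs $O(N^{3/2})$ instead of $O(N^2)$ for $N_1 = N_2 = \sqrt{N}$, which is where the speedup in parts (d)–(e) comes from.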

ch17-ex09

Medium

(Hutchinson Trace Estimator)

(a) Implement the Hutchinson estimator for the divergence of soft thresholding (where the true divergence is known).

(b) Plot estimation error vs. perturbation size $h$ for $h \in [10^{-8}, 10^{-1}]$.

(c) For a black-box denoiser (use a small MLP), estimate the divergence with 1, 3, 5, 10 probe vectors. Plot the variance of the estimate vs. number of probes.

(d) Run OAMP with Hutchinson divergence estimation and compare with analytical divergence.
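A sketch of the estimator for parts (a) and (c), using Rademacher probes and a forward difference; the probe count and step size are illustrative defaults:

```python
import numpy as np

def soft(x, thr=0.5):
    """Soft thresholding; its exact divergence is the fraction of surviving entries."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def hutchinson_div(denoiser, r, h=1e-4, n_probes=3, seed=0):
    """Monte Carlo estimate of the normalized divergence (1/N) tr(d denoiser / d r)."""
    rng = np.random.default_rng(seed)
    N = r.size
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=N)    # Rademacher probe vector
        est += z @ (denoiser(r + h * z) - denoiser(r)) / (h * N)
    return est / n_probes

rng = np.random.default_rng(2)
r = rng.standard_normal(10_000)
true_div = np.mean(np.abs(r) > 0.5)            # known divergence for part (a)
est_div = hutchinson_div(soft, r, h=1e-4, n_probes=5)
```

For a separable denoiser the Rademacher probes make $\mathbf{z}^T \mathbf{J} \mathbf{z}$ exact (since $z_i^2 = 1$), so the error in part (b) is dominated by the finite-difference step, not the probe variance; the probe count only matters for the non-separable black-box denoiser of part (c).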

ch17-ex10

Medium

(Sparsity Mismatch Analysis)

(a) Generate a BG signal with $\rho_0 = 0.10$. Run OAMP with BG-MMSE denoiser assuming $\tilde{\rho} \in \{0.01, 0.05, 0.10, 0.15, 0.20, 0.30\}$.

(b) Plot converged NMSE vs. $\tilde{\rho}$.

(c) Run mismatched state evolution and verify it predicts the empirical NMSE.

(d) Explain why underestimating $\rho$ is more harmful than overestimating it.

ch17-ex11

Medium

(Complex-Valued OAMP)

(a) Extend the OAMP algorithm to complex-valued signals $\mathbf{c} \in \mathbb{C}^N$ and measurements.

(b) Derive the complex BG-MMSE denoiser for $p_0(c) = (1-\rho)\delta(c) + \rho\,\mathcal{CN}(0, \sigma_c^2)$.

(c) Implement and test on a complex Kronecker sensing matrix with $N = 1024$, $M = 400$.

(d) Compare with the real-valued case (same dimensions and SNR).

ch17-ex12

Medium

(Comparison of Denoisers in OAMP)

(a) Implement OAMP with three denoisers: soft thresholding, BG-MMSE, and a small DnCNN (3-layer CNN).

(b) Train the DnCNN on 1000 BG images at various noise levels.

(c) Run all three on the same Kronecker sensing problem with $N = 1024$, $\delta = 0.4$, $\rho = 0.1$, $\text{SNR} = 25\,\text{dB}$.

(d) Plot NMSE vs. iteration for all three.

(e) Repeat with an extended-target scene (piecewise constant).

ch17-ex13

Hard

(Damped AMP vs OAMP)

(a) Implement damped AMP with damping parameter $\alpha \in \{0.1, 0.3, 0.5, 0.7, 1.0\}$ on a Kronecker partial-DFT matrix with $N = 1024$, $M = 400$.

(b) For each $\alpha$, find the converged NMSE (if it converges).

(c) Compare the best damped-AMP result with OAMP.

(d) Explain why damping cannot match OAMP's performance even when it stabilizes convergence.

ch17-ex14

Hard

(OAMP Phase Transition)

(a) For OAMP with BG-MMSE denoiser and Kronecker partial-DFT $\mathbf{A}$, compute the SE-predicted phase transition in the $(\delta, \rho)$ plane.

(b) Define success as NMSE $< -20\,\text{dB}$ after 50 SE iterations. Plot the boundary.

(c) Compare with the AMP phase transition for i.i.d. Gaussian $\mathbf{A}$ at the same $(\delta, \rho)$ grid.

(d) How much does the Kronecker structure degrade the phase transition compared to i.i.d. Gaussian?

ch17-ex15

Hard

(Variance Mismatch Analysis)

(a) Generate a BG signal with $\sigma_c^2 = 1.0$. Run OAMP with BG-MMSE assuming $\tilde{\sigma}_c^2 \in \{0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0\}$.

(b) Plot NMSE vs. $\tilde{\sigma}_c^2$.

(c) Compare the sensitivity to variance mismatch with sparsity mismatch (Exercise 10).

(d) For a scene with unknown statistics, propose a strategy that is robust to both types of mismatch.

ch17-ex16

Hard

(Near-Field Kronecker Approximation Error)

(a) For a near-field UPA geometry ($d = 5\lambda$, aperture $= 32\lambda$), compute the exact sensing matrix $\mathbf{A}$ and its best Kronecker approximation $\mathbf{A}_{1} \otimes \mathbf{A}_{2}$.

(b) Compute the relative approximation error $\|\mathbf{A} - \mathbf{A}_{1} \otimes \mathbf{A}_{2}\|_F / \|\mathbf{A}\|_F$.

(c) Run OAMP with the exact SVD and with the Kronecker-approximate SVD. How much NMSE does the approximation cost?

(d) At what distance does the Kronecker approximation become acceptable (< 1 dB NMSE penalty)?
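The best Frobenius-norm Kronecker approximation in part (a) can be computed with the Van Loan-Pitsianis rearrangement: reshape $\mathbf{A}$ so that any $\mathbf{A}_1 \otimes \mathbf{A}_2$ becomes the rank-1 matrix $\text{vec}(\mathbf{A}_1)\text{vec}(\mathbf{A}_2)^T$, then keep the leading SVD term. A sketch (factor shapes are assumed known from the array geometry):

```python
import numpy as np

def nearest_kron(A, m1, n1, m2, n2):
    """Best Frobenius-norm A ~ A1 (x) A2 via the Van Loan-Pitsianis rearrangement."""
    R = (A.reshape(m1, m2, n1, n2)   # split row index -> (i1, i2), col index -> (j1, j2)
           .transpose(0, 2, 1, 3)    # group (i1, j1) as rows vs (i2, j2) as cols
           .reshape(m1 * n1, m2 * n2))
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A1 = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)   # leading rank-1 term of R
    A2 = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A1, A2

# sanity check: an exactly Kronecker-structured A is recovered with zero error
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((4, 3)), rng.standard_normal((5, 6))
A = np.kron(B1, B2)
A1, A2 = nearest_kron(A, 4, 3, 5, 6)
err = np.linalg.norm(A - np.kron(A1, A2)) / np.linalg.norm(A)
```

Part (b) is then just `err` evaluated on the exact near-field matrix, and the trailing singular values of `R` quantify how far from Kronecker-structured the geometry is.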

ch17-ex17

Hard

(OAMP with Learned Denoiser)

(a) Train a noise-level-conditional DnCNN on 5000 BG scenes with $N = 32 \times 32$ at noise levels $\sigma \in [0.01, 1.0]$.

(b) Plug the trained DnCNN into OAMP using the Hutchinson divergence estimator.

(c) Compare with BG-MMSE OAMP on: (i) BG scenes, (ii) piecewise-constant scenes, (iii) smooth Gaussian-process scenes.

(d) Plot NMSE vs. iteration for all three scene types.

(e) Discuss when the learned denoiser helps and when BG-MMSE is sufficient.

ch17-ex18

Challenge

(Large-Scale Kronecker OAMP with GPU Acceleration)

(a) Implement Kronecker-factored OAMP on GPU (PyTorch or CuPy) for $N_1 = N_2 = 128$ ($N = 16384$).

(b) Benchmark against CPU implementation. Report speedup.

(c) Profile memory usage. What is the memory bottleneck?

(d) Run OAMP for 30 iterations at $\delta = 0.4$, $\text{SNR} = 25\,\text{dB}$, $\rho = 0.05$. Report wall-clock time and final NMSE.

(e) Push to $N_1 = N_2 = 256$ ($N = 65536$). Does it still fit in GPU memory?

ch17-ex19

Challenge

(EM-OAMP for Unknown Prior Parameters)

(a) Implement EM-OAMP: alternate OAMP iterations with M-step updates of $\rho$ and $\sigma_c^2$.

(b) Initialize with deliberately wrong parameters ($\tilde{\rho} = 0.5$, $\tilde{\sigma}_c^2 = 10$) and track the parameter estimates across EM iterations.

(c) Does EM-OAMP converge to the true parameters?

(d) Compare the final NMSE with oracle OAMP (true parameters known) and mismatched OAMP (fixed wrong parameters).

(e) Test robustness: repeat with non-BG scenes. Does EM-OAMP still find useful parameters?
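For the M-step in part (a), one standard choice (as in EM-BG-AMP-style schemes; the update equations below follow that convention and are not prescribed by the exercise) re-estimates $\rho$ and $\sigma_c^2$ from the per-coefficient posteriors of the scalar pseudo-data channel $r = c + \mathcal{N}(0, \tau^2)$. A sketch on a fixed channel; inside EM-OAMP, $\tau^2$ would be the current OAMP noise estimate:

```python
import numpy as np

def em_update(r, tau2, rho, sigc2):
    """One EM step for (rho, sigma_c^2) from pseudo-data r = c + N(0, tau2) noise."""
    g = sigc2 / (sigc2 + tau2)
    log_ratio = (np.log((1 - rho) / rho) + 0.5 * np.log((sigc2 + tau2) / tau2)
                 - r**2 * sigc2 / (2 * tau2 * (sigc2 + tau2)))
    pi = 1.0 / (1.0 + np.exp(log_ratio))   # posterior support probabilities (E-step)
    m = g * r                              # conditional mean given "active"
    V = g * tau2                           # conditional variance given "active"
    rho_new = np.mean(pi)                  # M-step
    sigc2_new = np.sum(pi * (m**2 + V)) / np.sum(pi)
    return rho_new, sigc2_new

# part (b): start from deliberately wrong parameters and iterate
rng = np.random.default_rng(0)
c = rng.standard_normal(50_000) * (rng.random(50_000) < 0.1)  # true rho=0.1, sigc2=1
r = c + 0.05 * rng.standard_normal(50_000)                    # tau2 = 0.0025
rho_hat, sigc2_hat = 0.5, 10.0
for _ in range(30):
    rho_hat, sigc2_hat = em_update(r, 0.0025, rho_hat, sigc2_hat)
```

Note that the EM fixed point carries a small bias (false alarms inflate $\hat{\rho}$ slightly), which grows with $\tau^2$; this is worth keeping in mind when interpreting part (c).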

ch17-ex20

Challenge

(Full RF Imaging Pipeline: From Measurements to Image)

(a) Set up a complete RF imaging scenario: 8-element ULA Tx, 8-element ULA Rx, 64 OFDM subcarriers, 2D scene with $32 \times 32$ voxels.

(b) Construct the Kronecker-structured sensing matrix from the array geometry and subcarrier frequencies.

(c) Generate a realistic scene with 5 point scatterers and a reflecting wall. Compute noiseless and noisy measurements.

(d) Reconstruct using: (i) backpropagation, (ii) LASSO via FISTA, (iii) OAMP with BG-MMSE, (iv) OAMP with a simple trained denoiser.

(e) Compare all methods in NMSE, visual quality, and computation time. Which is best and why?