Exercises

ex-ch20-01

Easy

Write down the AMP iteration for soft-thresholding with threshold $\lambda_t$ and $\delta = M/N$. Identify the Onsager term explicitly.
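
A minimal NumPy sketch of the denoiser itself, as a starting point (the iteration and the Onsager term are the exercise; the helper name `soft_threshold` is ours):

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-threshold denoiser: sign(u) * max(|u| - lam, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```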

ex-ch20-02

Easy

Show that the derivative of the soft-threshold denoiser is, almost everywhere, the indicator of being above threshold: $\eta_{\mathrm{st}}'(u;\lambda) = \mathbf{1}\{|u|>\lambda\}$. Where is $\eta_{\mathrm{st}}$ non-differentiable, and what does the SE analysis require about this?
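
A finite-difference sanity check of the almost-everywhere derivative; the grid, threshold, and step size are arbitrary choices of ours:

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u, lam, h = np.linspace(-3, 3, 1001), 1.0, 1e-6
# Central finite difference of the soft threshold.
fd = (soft_threshold(u + h, lam) - soft_threshold(u - h, lam)) / (2 * h)
ind = (np.abs(u) > lam).astype(float)   # indicator 1{|u| > lam}
mask = np.abs(np.abs(u) - lam) > 1e-3   # stay away from the kinks at |u| = lam
print(np.max(np.abs(fd - ind)[mask]))   # ~0 away from the kinks
```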

ex-ch20-03

Easy

For a Bernoulli--Gaussian prior $X \sim (1-\rho)\delta_0 + \rho\,\mathcal{N}(0,1)$, write the state-evolution recursion for AMP with soft-thresholding.
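
A Monte Carlo scaffold for evaluating one step of the recursion you write down; the sample size, seed, and parameter defaults are our choices:

```python
import numpy as np

def se_step_mc(tau2, lam, rho=0.1, sigma2=1e-4, delta=0.5, n=200_000, seed=0):
    """Estimate tau_{t+1}^2 = sigma^2 + delta^{-1} E[(eta_st(X + tau Z; lam) - X)^2]
    for X ~ (1 - rho) delta_0 + rho N(0, 1), by Monte Carlo."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) * (rng.random(n) < rho)   # Bernoulli-Gaussian samples
    u = x + np.sqrt(tau2) * rng.standard_normal(n)       # effective observation
    eta = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
    return sigma2 + np.mean((eta - x) ** 2) / delta
```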

ex-ch20-04

Easy

Verify that in the noiseless case $\sigma^2=0$, the SE recursion for L1-AMP takes the form $\tau_{t+1}^2 = \delta^{-1} M(\rho,\alpha)\,\tau_t^2$ when $\lambda_t = \alpha \tau_t$, for some function $M$ independent of $\tau$. Under what condition does $\tau_t \to 0$?

ex-ch20-05

Medium

Derive the MMSE denoiser for a Bernoulli--Gaussian prior $X \sim (1-\rho)\delta_0 + \rho\,\mathcal{N}(0,\sigma_x^2)$ observed in Gaussian noise of variance $\tau^2$. Show that $\eta_{\mathrm{mmse}}$ is smooth (in contrast to the soft threshold) and compute its derivative.
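
A brute-force quadrature posterior mean for cross-checking the closed form you derive; the grid limits and resolution are our choices:

```python
import numpy as np

def mmse_numeric(u, rho=0.1, sx2=1.0, tau2=0.1):
    """E[X | X + tau Z = u] for the Bernoulli-Gaussian prior, by quadrature over x;
    compare against your analytic eta_mmse."""
    x = np.linspace(-10, 10, 4001)[None, :]           # quadrature grid for the slab
    u = np.atleast_1d(np.asarray(u, float))[:, None]
    dx = x[0, 1] - x[0, 0]
    lik = np.exp(-(u - x) ** 2 / (2 * tau2))          # Gaussian likelihood (unnormalised)
    spike = np.exp(-u[:, 0] ** 2 / (2 * tau2))        # likelihood of the spike x = 0
    gauss = np.exp(-x ** 2 / (2 * sx2)) / np.sqrt(2 * np.pi * sx2)
    num = rho * np.sum(x * lik * gauss, axis=1) * dx
    den = (1 - rho) * spike + rho * np.sum(lik * gauss, axis=1) * dx
    return num / den
```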

ex-ch20-06

Medium

Prove the Stein identity: for a smooth, bounded $\eta$ and $U = X + \tau Z$ with $Z \sim \mathcal{N}(0,1)$ independent of $X$, $\mathrm{Cov}(\eta(U),\, U - X) = \tau^2\, \mathbb{E}[\eta'(U)]$. (This is the identity that connects the divergence of the MMSE denoiser to its posterior variance.)
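
A Monte Carlo check of the identity with a concrete smooth, bounded choice $\eta = \tanh$ and a Bernoulli--Gaussian $X$ (both our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, rho = 1_000_000, 0.5, 0.3
x = rng.standard_normal(n) * (rng.random(n) < rho)   # Bernoulli-Gaussian X
u = x + tau * rng.standard_normal(n)                 # U = X + tau Z
eta = np.tanh(u)                                     # smooth bounded test denoiser
lhs = np.mean(eta * (u - x))                         # Cov(eta(U), U - X), since E[U - X] = 0
rhs = tau ** 2 * np.mean(1 - np.tanh(u) ** 2)        # tau^2 E[eta'(U)]
print(lhs, rhs)                                      # agree to Monte Carlo accuracy
```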

ex-ch20-07

Medium

Implement AMP with soft-thresholding in 20 lines of NumPy. For $(N,M)=(500,250)$, a sparse Bernoulli--Gaussian signal with $\rho=0.1$, $\sigma^2=10^{-4}$, and $\lambda_t = 1.5\,\hat{\tau}_t$ where $\hat{\tau}_t^2 = \|\mathbf{r}^t\|^2/M$, run 20 iterations and report the normalised MSE.
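
One possible reference sketch under the stated parameters; the seed and matrix normalisation are our choices, and your 20-line version may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, rho, sigma2, T = 500, 250, 0.1, 1e-4, 20
A = rng.standard_normal((M, N)) / np.sqrt(M)          # i.i.d. Gaussian, unit-norm columns in expectation
x0 = rng.standard_normal(N) * (rng.random(N) < rho)   # Bernoulli-Gaussian signal
y = A @ x0 + np.sqrt(sigma2) * rng.standard_normal(M)

xhat, r = np.zeros(N), y.copy()
for t in range(T):
    tau = np.linalg.norm(r) / np.sqrt(M)              # tau_hat_t = ||r^t|| / sqrt(M)
    u = xhat + A.T @ r                                # effective observation
    lam = 1.5 * tau
    xhat = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    eta_prime = np.mean(np.abs(u) > lam)              # <eta'_st> at this iteration
    r = y - A @ xhat + (N / M) * eta_prime * r        # residual with Onsager correction
print("NMSE:", np.sum((xhat - x0) ** 2) / np.sum(x0 ** 2))
```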

ex-ch20-08

Medium

Explain the calibration equation $\lambda_{\mathrm{eff}} = \lambda_\star(1 - \delta^{-1}\langle\eta'\rangle_\star)$ linking L1-AMP's threshold to the effective LASSO regulariser. What does it mean for the effective penalty to be smaller than the AMP threshold?

ex-ch20-09

Medium

Let $\Psi(\tau^2) = \sigma^2 + \delta^{-1}\,\mathbb{E}[(\eta(X+\tau Z;\theta)-X)^2]$. Show that $\Psi$ is monotone increasing and continuous in $\tau^2$. Conclude that the state-evolution recursion $\tau_{t+1}^2 = \Psi(\tau_t^2)$ is monotone in $t$.

ex-ch20-10

Medium

Construct a sensing matrix $\mathbf{A}$ that makes AMP diverge and provide a numerical demonstration. Use a sub-sampled DFT matrix with $N=256$, $M=128$.
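
A sketch of the sub-sampled DFT construction, to be plugged into an AMP loop such as the one sketched for ex-ch20-07; the row selection, scaling, and seed are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 128
F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix
rows = rng.choice(N, size=M, replace=False)   # keep M of N rows at random
A = F[rows] * np.sqrt(N / M)                  # rescale so columns have unit norm in expectation
# Run the AMP loop with A.conj().T in place of A.T (the data are complex)
# and track ||r^t||; the exercise asks you to exhibit and explain divergence.
```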

ex-ch20-11

Medium

Compute (numerically) the Donoho--Tanner phase-transition curve $\rho^\star(\delta)$ by scanning the plane $(\delta,\rho)\in(0,1)^2$ and checking whether the L1-AMP SE recursion with optimal $\alpha$ converges to zero.
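
A coarse-grid scaffold for the scan; the grids, sample size, and convergence test are our choices (refine them for a smooth curve):

```python
import numpy as np

def se_converges(delta, rho, alpha, T=50, n=20_000, tol=1e-6, seed=0):
    """Noiseless L1-AMP SE with lambda_t = alpha * tau_t: does tau^2 contract to ~0?"""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) * (rng.random(n) < rho)
    z = rng.standard_normal(n)
    tau2 = 1.0
    for _ in range(T):
        u = x + np.sqrt(tau2) * z
        lam = alpha * np.sqrt(tau2)
        eta = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        tau2 = np.mean((eta - x) ** 2) / delta
        if tau2 < tol:
            return True
    return False

alphas = np.linspace(0.5, 3.0, 11)            # crude stand-in for optimising alpha
for delta in np.linspace(0.1, 0.9, 5):
    rho_star = 0.0
    for rho in np.linspace(0.02, 0.98, 25):   # largest rho that still contracts
        if any(se_converges(delta, rho, a) for a in alphas):
            rho_star = rho
    print(f"delta={delta:.2f}  rho_star~{rho_star:.2f}")
```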

ex-ch20-12

Hard

Derive the Taylor-expansion cancellation that gives the Onsager term. Concretely, show that without the Onsager term the conditional distribution of $\mathbf{A}^{H}\mathbf{r}^t+\hat{\mathbf{x}}^t$ given the history has a bias proportional to $\mathbf{r}^{t-1}$, and compute the proportionality constant.

ex-ch20-13

Hard

Show that damped AMP with parameter $\theta \in (0, 1]$ can be viewed as a single AMP step with an augmented denoiser. Use this to explain qualitatively why damping breaks state evolution (except in the limit $\theta \to 0$).

ex-ch20-14

Hard

For a non-negative sparse signal with prior $(1-\rho)\delta_0 + \rho\,\mathrm{Exp}(1)$, design a denoiser that is Bayes-optimal and verify via simulation that the AMP MSE converges to the state-evolution fixed point predicted by this denoiser.
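
A brute-force posterior mean for this prior, handy for validating the closed-form denoiser you design; the grid choices are ours:

```python
import numpy as np

def eta_exp_numeric(u, rho=0.1, tau2=0.25):
    """E[X | X + tau Z = u] for X ~ (1-rho) delta_0 + rho Exp(1), by quadrature."""
    x = np.linspace(0.0, 30.0, 6001)[None, :]         # grid over the Exp(1) support
    u = np.atleast_1d(np.asarray(u, float))[:, None]
    dx = x[0, 1] - x[0, 0]
    lik = np.exp(-(u - x) ** 2 / (2 * tau2))          # unnormalised Gaussian likelihood
    prior = np.exp(-x)                                # Exp(1) density on x >= 0
    spike = np.exp(-u[:, 0] ** 2 / (2 * tau2))        # likelihood of the spike x = 0
    num = rho * np.sum(x * lik * prior, axis=1) * dx
    den = (1 - rho) * spike + rho * np.sum(lik * prior, axis=1) * dx
    return num / den
```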

ex-ch20-15

Hard

Show that AMP with the identity denoiser $\eta(u)=u$ reduces to pseudo-inverse iteration and is unstable for $\delta < 1$. Compute the state-evolution recursion for this degenerate case and identify when it has a fixed point.

ex-ch20-16

Challenge

Prove that, in the large-$N$ limit, running AMP on $\mathbf{A}$ with i.i.d. Gaussian entries achieves the replica-symmetric MMSE when $\eta=\eta_{\mathrm{mmse}}$ is matched to the prior. Contrast with LASSO's MSE computed from the same state-evolution machinery.

ex-ch20-17

Challenge

Design a D-AMP variant using a 3-layer neural denoiser trained on Bernoulli--Gaussian signals plus AWGN at various noise levels. Compare its MSE trajectory to L1-AMP and Bayes-AMP. Explain theoretically why learned denoisers can in principle match Bayes-AMP and why, in practice, they often fall short.
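
A minimal PyTorch sketch of the 3-layer denoiser and its training loop; the architecture, noise-level sampling, and hyperparameters are all our choices:

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """3-layer MLP applied elementwise to (u, tau) pairs."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, u, tau):
        inp = torch.stack([u, tau.expand_as(u)], dim=-1)
        return self.net(inp).squeeze(-1)

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rho = 0.1
for step in range(2000):
    x = torch.randn(4096) * (torch.rand(4096) < rho)   # Bernoulli-Gaussian signal
    tau = 10 ** (torch.rand(1) * 2 - 2)                # noise level in [0.01, 1]
    u = x + tau * torch.randn(4096)
    loss = ((model(u, tau) - x) ** 2).mean()           # MSE loss -> posterior-mean target
    opt.zero_grad(); loss.backward(); opt.step()
# Plug `model` into the AMP loop, estimating <eta'> via autograd or finite differences.
```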