Chapter Summary

Key Points

  1. Algorithm unrolling converts $T$ iterations of an optimisation algorithm into a $T$-layer neural network with learnable parameters, inheriting interpretability from the algorithm and adaptability from deep learning. The architecture encodes the forward model $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$, dramatically reducing the parameter count versus generic networks.
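The idea can be made concrete with a minimal sketch: unrolling ISTA-style iterations into a fixed-depth network whose per-layer step sizes and thresholds would be trained (here fixed values stand in for trained ones; all names are illustrative).

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_net(y, A, steps, thetas):
    """T unrolled ISTA layers. Each layer t owns a learnable step
    size mu_t and threshold theta_t; the forward model y = A c + w
    enters through A itself, so no parameters are spent relearning
    the physics."""
    c = np.zeros(A.shape[1])
    for mu_t, theta_t in zip(steps, thetas):
        c = c + mu_t * A.T @ (y - A @ c)   # gradient step on ||y - Ac||^2 / 2
        c = soft_threshold(c, theta_t)     # learned proximal step
    return c
```

With only two scalars per layer, the parameter count is tiny compared with a generic network of the same depth; training would simply backpropagate through all $T$ layers.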

  2. Unrolled OAMP with ProxNet is the state of the art for RF imaging with structured sensing matrices. The Kronecker-LMMSE step exploits $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2$ for $O(N \log N)$ per-layer cost, and the state evolution provides a noise schedule for the learned CNN denoiser. With $\sim 55$K parameters per layer, it achieves a 3--6 dB gain over hand-tuned OAMP and outperforms a pure U-Net with $20\times$ fewer parameters.
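The structure the Kronecker-LMMSE step relies on can be checked numerically: applying $\mathbf{A}_1 \otimes \mathbf{A}_2$ never requires materialising the full matrix. (The $O(N \log N)$ figure additionally assumes FFT-friendly factors, which this sketch does not model.)

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((4, 5))
A2 = rng.standard_normal((3, 6))
C = rng.standard_normal((5, 6))   # coefficients arranged as a matrix

# Naive: materialise the full (12 x 30) Kronecker matrix.
y_naive = np.kron(A1, A2) @ C.reshape(-1)

# Structured: with row-major vec, (A1 kron A2) vec(C) = vec(A1 C A2^T),
# i.e. two small matrix products instead of one huge one.
y_fast = (A1 @ C @ A2.T).reshape(-1)

assert np.allclose(y_naive, y_fast)
```

The same identity applies in the adjoint direction, which is why every matrix-vector product inside the LMMSE step stays cheap.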

  3. LISTA unrolls ISTA with learnable matrices and thresholds, achieving orders-of-magnitude faster convergence. ALISTA shows that the essential learned quantity is the threshold schedule. However, LISTA's dense matrices make it prohibitive for imaging-scale problems ($N > 10^4$), where OAMP's physics-based LMMSE step is far more efficient.
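A minimal sketch of the LISTA recursion makes the scaling problem visible: the learned matrices are dense, so the $N \times N$ matrix $S$ dominates memory and compute at imaging scale. (Here the matrices are set to their ISTA initialisation rather than trained.)

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista(y, W, S, thetas):
    """LISTA recursion x_{t+1} = soft(W y + S x_t, theta_t).
    W (N x M) and S (N x N) are learned dense matrices; for
    N > 1e4, storing and applying S is the bottleneck."""
    x = np.zeros(S.shape[0])
    for theta_t in thetas:
        x = soft_threshold(W @ y + S @ x, theta_t)
    return x

# ISTA initialisation on a tiny, well-conditioned problem.
rng = np.random.default_rng(1)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
W = A.T / L
S = np.eye(5) - A.T @ A / L
```

ALISTA's observation fits this picture: fixing $W$ and $S$ analytically and learning only the threshold schedule `thetas` removes the dense-matrix cost while retaining most of the speed-up.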

  4. Learned ADMM benefits from the splitting structure and the dual variable's cross-layer memory. Learned Primal-Dual adds dual-domain processing for measurement-space artefacts. Both are valid architectures, but for Kronecker-structured sensing matrices, unrolled OAMP provides the best match between algorithm structure and physics.
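The cross-layer memory can be seen in a minimal (untrained) sketch of one unrolled ADMM layer for the lasso: the dual variable `u` is passed from layer to layer, which a plain unrolled gradient method has no analogue of. In a learned variant, `rho` and `lam` would be per-layer trainable parameters.

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def admm_layer(y, A, AtA_rho_inv, x, z, u, rho, lam):
    """One scaled-form ADMM layer for min ||y - Ax||^2/2 + lam*||z||_1.
    The dual variable u accumulates the running constraint violation
    x - z: this is the cross-layer memory."""
    x = AtA_rho_inv @ (A.T @ y + rho * (z - u))  # data-fidelity subproblem
    z = soft_threshold(x + u, lam / rho)         # proximal subproblem
    u = u + x - z                                # dual (memory) update
    return x, z, u

# Tiny demo with A = I, where the lasso solution is soft(y, lam).
A = np.eye(4)
rho, lam = 1.0, 0.3
AtA_rho_inv = np.linalg.inv(A.T @ A + rho * np.eye(4))
y = np.array([1.0, -0.2, 0.5, 0.0])
x = z = u = np.zeros(4)
for _ in range(200):
    x, z, u = admm_layer(y, A, AtA_rho_inv, x, z, u, rho, lam)
```

The splitting structure is also visible here: the data-fidelity and prior updates never interact directly, only through `z` and `u`, which is what makes swapping the prox for a learned network straightforward.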

  5. Theoretical guarantees include: convergence under RIP (linear decay), generalisation bounds scaling as $\sqrt{P/n} \cdot \prod_k L_k$ (improved by weight tying and inductive bias), robustness to model mismatch proportional to the perturbation norm, and convergent unrolling via Lipschitz constraints for safety-critical applications.

  6. Hierarchical soft-thresholding replaces scalar thresholds with spatially varying threshold maps predicted by an auxiliary CNN. Applied to OTFS channel estimation (Dehkordi/Jung/Caire), it exploits the angular-delay-Doppler tree structure for 3--5 dB gains over scalar methods, while preserving the proximal-operator interpretation and convergence guarantees.
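The threshold-map idea reduces to an element-wise proximal step; a minimal sketch, with a hand-made map standing in for the CNN prediction:

```python
import numpy as np

def soft_threshold_map(x, theta_map):
    """Element-wise soft-thresholding with a spatially varying
    threshold map (same shape as x). This is still the exact
    proximal operator of the weighted l1 norm sum_i theta_i*|x_i|,
    which is why the scalar convergence analysis carries over."""
    return np.sign(x) * np.maximum(np.abs(x) - theta_map, 0.0)

x = np.array([1.0, -0.5, 0.2])
theta = np.array([0.3, 0.6, 0.1])   # in practice, predicted by the auxiliary CNN
denoised = soft_threshold_map(x, theta)
```

The tree structure enters through the map: for instance, a large parent coefficient can lower the predicted thresholds of its angular-delay-Doppler children, encoding the hierarchical sparsity prior without leaving the proximal framework.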

Looking Ahead

This chapter established unrolled OAMP-ProxNet as the recommended learned reconstruction method for RF imaging with structured sensing operators. Chapter 19 extends the message-passing framework to non-Gaussian likelihoods (GAMP) and joint hyperparameter estimation (EM-GAMP). The Plug-and-Play framework, diffusion-based priors, and deep equilibrium models in later chapters offer alternative approaches to combining physics and learning, but unrolled OAMP-ProxNet remains the baseline against which all are compared.